Latest commit 5fe89d85c3 by Zygo Blaxell (2025-01-19 22:19:42 -05:00):

extent scan: make sure we run every extent crawler once per transaction
There's a pathological case where all of the extent scan crawlers except
one are at the end of a crawl cycle, but the one crawler that is still
running is keeping the Task queue full.  The result is that bees never
starts the other extent scan crawlers, because the queue is always
full at the instant a new transid triggers the start of a new scan.
That's bad because it will result in bees falling behind when new data
from the inactive size tiers appears.

To fix this, check for throttling _after_ creating at least one scan task
in each crawler.  That will keep the crawlers running, and possibly allow
them to claw back some space in the Task queue.  It slightly overcommits
the Task queue, so there will be a few more Tasks than nominally allowed.
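
A minimal sketch of the idea, assuming hypothetical names (Crawler,
create_scan_task, queue_is_throttled); the real bees classes and limits
differ:

    #include <cstddef>
    #include <vector>

    static std::size_t g_queue_size = 0;       // Tasks currently queued
    static const std::size_t QUEUE_LIMIT = 8;  // hypothetical throttle limit

    struct Crawler {
            std::size_t extents_left = 0;
            // Push one extent-scan Task onto the queue.
            void create_scan_task() { ++g_queue_size; --extents_left; }
    };

    static bool queue_is_throttled() { return g_queue_size >= QUEUE_LIMIT; }

    // Run once per new transid: every crawler creates at least one Task
    // before the throttle is consulted, so a saturated queue can never
    // starve an idle crawler.  This overcommits the queue by up to one
    // Task per crawler.
    void on_new_transid(std::vector<Crawler> &crawlers)
    {
            for (auto &c : crawlers) {
                    if (!c.extents_left) continue;
                    do {
                            c.create_scan_task();
                    } while (c.extents_left && !queue_is_throttled());
            }
    }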

Also (re)introduce some hysteresis in the queue size limit and reduce it
a little, so that bees isn't continually stopping and restarting crawls
every time one task is created or completed, and so that we stay under
the configured Task limit despite overcommitting.
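
Sketched with hypothetical watermark values, the hysteresis looks roughly
like this: Task creation stops at a high watermark and does not resume
until the queue drains below a lower one, so creating or completing a
single Task cannot flip the crawlers between stopped and running:

    #include <cstddef>

    class QueueThrottle {
            std::size_t m_high, m_low;       // hypothetical watermarks
            bool m_throttled = false;
    public:
            QueueThrottle(std::size_t high, std::size_t low)
                    : m_high(high), m_low(low) {}
            // Sticky threshold: trip at m_high, reset only below m_low.
            bool throttled(std::size_t queue_size) {
                    if (queue_size >= m_high) m_throttled = true;
                    else if (queue_size < m_low) m_throttled = false;
                    return m_throttled;
            }
    };

For example, QueueThrottle(96, 64) would stop crawls at 96 queued Tasks and
restart them only after the queue drains below 64.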

Signed-off-by: Zygo Blaxell <bees@furryterror.org>

BEES

Best-Effort Extent-Same, a btrfs deduplication agent.

About bees

bees is a block-oriented userspace deduplication agent designed to scale up to large btrfs filesystems. It combines offline dedupe with an incremental data scan capability to minimize the time data spends on disk between write and dedupe.

Strengths

  • Space-efficient hash table - can use as little as 1 GB of hash table per 10 TB of unique data (0.1 GB/TB; see the sizing sketch after this list)
  • Daemon mode - incrementally dedupes new data as it appears
  • Largest extents first - recover more free space during fixed maintenance windows
  • Works with btrfs compression - dedupe any combination of compressed and uncompressed files
  • Whole-filesystem dedupe - scans data only once, even with snapshots and reflinks
  • Persistent hash table for rapid restart after shutdown
  • Constant hash table size - no increased RAM usage if data set becomes larger
  • Works on live data - no scheduled downtime required
  • Automatic self-throttling - reduces system load
  • btrfs support - recovers more free space from btrfs than naive dedupers
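
As a back-of-the-envelope illustration of the hash table ratio above (the
0.1 GB/TB figure is the README's own; the arithmetic below is not part of
bees):

    #include <cstdio>

    int main()
    {
            const double gb_per_tb = 0.1;   // minimum hash table GB per TB
            const double unique_tb = 10.0;  // example: 10 TB of unique data
            std::printf("hash table: ~%.1f GB\n", gb_per_tb * unique_tb);
            return 0;                       // prints "hash table: ~1.0 GB"
    }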

Weaknesses

Installation and Usage

More Information

Bug Reports and Contributions

Email bug reports and patches to Zygo Blaxell <bees@furryterror.org>.

You can also use Github:

    https://github.com/Zygo/bees

Copyright 2015-2025 Zygo Blaxell <bees@furryterror.org>.

GPL (version 3 or later).
