mirror of https://github.com/Zygo/bees.git (synced 2025-07-01 08:12:27 +02:00, commit b1bd99c077a1e121fb3989b5655eba3387b07f5b)
During the search, the region between `upper_bound` and `target_pos` should contain no data items. The search lowers `upper_bound` and raises `lower_bound` until they both point to the last item before `target_pos`.

The `lower_bound` is increased to the position of the last item returned by a search (`high_pos`) when that item is lower than `target_pos`. This avoids some loop iterations compared to a strict binary search algorithm, which would increase `lower_bound` only as far as `probe_pos`.

When the search runs over live extent items, occasionally a new extent will appear between `upper_bound` and `target_pos`. When this happens, `lower_bound` is bumped up to the position of one of the new items, but that position is in the "unoccupied" space between `upper_bound` and `target_pos`, where no items are supposed to exist, so `seek_backward` throws an exception.

To cut down on the noise, only increase `lower_bound` as far as `upper_bound`. This avoids the exception without increasing the number of loop iterations for normal cases. In the exceptional cases, extra loop iterations are needed to skip over the new items. This raises the worst-case number of loop iterations by one.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
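The loop described above can be sketched in isolation. This is a simplified, hypothetical model, not the actual bees code: the real `seek_backward` operates on btrfs tree searches, while here `fetch` models a forward-only search over a sorted vector of item positions, and the function names and signatures are illustrative. The `std::min` clamp on `lower_bound` is the behavior the commit message describes.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <optional>
#include <vector>

// Model of a forward-only tree search: returns the first item at or
// after `pos`, or nothing if no such item exists.
static std::optional<uint64_t> fetch(const std::vector<uint64_t> &items,
                                     uint64_t pos) {
        auto it = std::lower_bound(items.begin(), items.end(), pos);
        if (it == items.end()) return std::nullopt;
        return *it;
}

// Find the position of the last item strictly before `target_pos`
// (assumes such an item exists).
uint64_t seek_backward(const std::vector<uint64_t> &items,
                       uint64_t target_pos) {
        uint64_t lower_bound = 0;
        uint64_t upper_bound = target_pos;
        // Invariant: no items exist in [upper_bound, target_pos).
        while (lower_bound + 1 < upper_bound) {
                const uint64_t probe_pos =
                        lower_bound + (upper_bound - lower_bound) / 2;
                const auto found = fetch(items, probe_pos);
                if (found && *found < target_pos) {
                        // high_pos is the last item the search returned.
                        // Jumping lower_bound to it skips iterations vs.
                        // a strict binary search (which would use
                        // probe_pos), but it is clamped to upper_bound so
                        // a concurrently inserted item cannot push
                        // lower_bound into the "unoccupied" region.
                        const uint64_t high_pos = *found;
                        lower_bound = std::min(high_pos, upper_bound);
                } else {
                        // No item in [probe_pos, target_pos).
                        upper_bound = probe_pos;
                }
        }
        return lower_bound;
}
```

On a static item set the clamp never triggers, because an item returned below `target_pos` cannot lie in the empty region above `upper_bound`; it only matters when a new item appears there mid-search, which is exactly the exceptional case the change handles.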
BEES
Best-Effort Extent-Same, a btrfs deduplication agent.
About bees
bees is a block-oriented userspace deduplication agent designed to scale up to large btrfs filesystems. It combines offline dedupe with an incremental data scan capability, minimizing the time data spends on disk between being written and being deduped.
Strengths
- Space-efficient hash table - can use as little as 1 GB of hash table per 10 TB of unique data (0.1 GB/TB)
- Daemon mode - incrementally dedupes new data as it appears
- Largest extents first - recover more free space during fixed maintenance windows
- Works with btrfs compression - dedupe any combination of compressed and uncompressed files
- Whole-filesystem dedupe - scans data only once, even with snapshots and reflinks
- Persistent hash table for rapid restart after shutdown
- Constant hash table size - no increased RAM usage if data set becomes larger
- Works on live data - no scheduled downtime required
- Automatic self-throttling - reduces system load
- btrfs support - recovers more free space from btrfs than naive dedupers
Weaknesses
- Whole-filesystem dedupe - has no include/exclude filters, does not accept file lists
- Requires root privilege (`CAP_SYS_ADMIN` plus the usual filesystem read/modify caps)
- First run may increase metadata space usage if many snapshots exist
- Constant hash table size - no decreased RAM usage if data set becomes smaller
- btrfs only
Installation and Usage
Recommended Reading
- bees Gotchas
- btrfs kernel bugs - especially DATA CORRUPTION WARNING for old kernels
- bees vs. other btrfs features
- What to do when something goes wrong
More Information
Bug Reports and Contributions
Email bug reports and patches to Zygo Blaxell bees@furryterror.org.
You can also use Github:
https://github.com/Zygo/bees
Copyright & License
Copyright 2015-2025 Zygo Blaxell bees@furryterror.org.
GPL (version 3 or later).