Split each scan mode into two distinct phases:

1. A heavy discovery phase, where we search the entire filesystem for something (new items in subvol trees in this case).
2. A light consuming phase, where we fetch extents to dedupe from places that we found in the discovery phase.

Part 1 recomputes the subvol ordering every time there is a new transid. For some scan modes this computation is quite expensive, far too costly to pay for every extent, so we do it no more than once per transaction.

Part 2 is run every time a worker thread hits the crawl_more Task. It simply pulls one extent from the first crawler on a sorted list, removing the crawler from the list when the crawler runs out of data.

Part 1 creates a new structure and swaps it into place, while Part 2 continues to run using the previous structure. Neither of these needs to block the other, so they don't.

The separate class and base pointer also make it easier to add new scan modes that are not based on subvol trees or that don't use BeesCrawl.

While we're here, fix up some method visibility in BeesRoots.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
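A rough shape for this split might look like the following. This is a hedged sketch, not bees source code: the names (ScanMode, Crawler, CrawlState, ExtentRef, rebuild, pop_one) are hypothetical stand-ins for the scan-mode class, BeesCrawl, and the crawl_more Task described above.

    #include <cstdint>
    #include <deque>
    #include <memory>
    #include <mutex>
    #include <optional>
    #include <vector>

    // Hypothetical stand-ins, not the real bees classes.
    struct ExtentRef { uint64_t subvol = 0, inode = 0, offset = 0; };

    // One crawler per subvol tree; discovery fills it, consumers drain it.
    struct Crawler {
        std::deque<ExtentRef> extents;
    };

    // Output of the heavy phase: crawlers sorted in scan order for one transid.
    struct CrawlState {
        std::mutex mtx;                                // serializes consumers
        std::vector<std::shared_ptr<Crawler>> sorted;  // most urgent first
    };

    class ScanMode {
        std::shared_ptr<CrawlState> m_state = std::make_shared<CrawlState>();

    public:
        // Part 1 (heavy, at most once per transid): recompute the crawler
        // ordering, then publish it with a single pointer swap.  Consumers
        // already running against the old state are unaffected.
        void rebuild(std::vector<std::shared_ptr<Crawler>> new_order) {
            auto next = std::make_shared<CrawlState>();
            next->sorted = std::move(new_order);
            std::atomic_store(&m_state, std::move(next));
        }

        // Part 2 (light, per worker task): pull one extent from the first
        // crawler, dropping crawlers from the list as they run out of data.
        std::optional<ExtentRef> pop_one() {
            auto state = std::atomic_load(&m_state);
            std::lock_guard<std::mutex> lock(state->mtx);
            while (!state->sorted.empty()) {
                Crawler &front = *state->sorted.front();
                if (!front.extents.empty()) {
                    ExtentRef e = front.extents.front();
                    front.extents.pop_front();
                    return e;
                }
                state->sorted.erase(state->sorted.begin());
            }
            return std::nullopt;  // drained; wait for the next rebuild
        }
    };

The single pointer swap is what lets the two phases avoid blocking each other: discovery builds the next state off to the side, and any consumer that already loaded the old pointer simply finishes its work against the old list.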
BEES
Best-Effort Extent-Same, a btrfs deduplication agent.
About bees
bees is a block-oriented userspace deduplication agent designed for large btrfs filesystems. It combines offline dedupe with an incremental data scan capability to minimize the time data spends on disk between write and dedupe.
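The "Extent-Same" in the name refers to the kernel's byte-range deduplication ioctl, FIDEDUPERANGE (originally BTRFS_IOC_FILE_EXTENT_SAME). The following is a hedged sketch of that primitive, not bees code; the offsets and length are arbitrary example values, and both files are assumed to hold identical data in the given range. It shows the kind of request a dedupe agent submits once it has found matching blocks.

    #include <linux/fs.h>      // FIDEDUPERANGE, struct file_dedupe_range
    #include <sys/ioctl.h>
    #include <fcntl.h>
    #include <cstdio>
    #include <cstdlib>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s SRC DST\n", argv[0]);
            return 1;
        }
        int src = open(argv[1], O_RDONLY);
        int dst = open(argv[2], O_RDWR);
        if (src < 0 || dst < 0) { perror("open"); return 1; }

        // Room for one destination range; a real agent batches many per call.
        auto *arg = static_cast<file_dedupe_range *>(
            calloc(1, sizeof(file_dedupe_range) + sizeof(file_dedupe_range_info)));
        arg->src_offset = 0;
        arg->src_length = 128 * 1024;  // btrfs's maximum compressed extent size
        arg->dest_count = 1;
        arg->info[0].dest_fd = dst;
        arg->info[0].dest_offset = 0;

        // The kernel compares the two ranges byte-for-byte before sharing the
        // extent, so a false hash match can never corrupt data.
        if (ioctl(src, FIDEDUPERANGE, arg) < 0) { perror("FIDEDUPERANGE"); return 1; }
        if (arg->info[0].status == FILE_DEDUPE_RANGE_SAME)
            printf("deduped %llu bytes\n",
                   (unsigned long long)arg->info[0].bytes_deduped);
        else
            printf("not deduped, status %d\n", (int)arg->info[0].status);
        free(arg);
        return 0;
    }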
Strengths
- Space-efficient hash table and matching algorithms - can use as little as 1 GB hash table per 10 TB unique data (0.1 GB/TB; see the sizing sketch after this list)
- Daemon incrementally dedupes new data using btrfs tree search
- Works with btrfs compression - dedupe any combination of compressed and uncompressed files
- NEW: Works around btrfs send problems with dedupe and incremental parent snapshots
- Works around btrfs filesystem structure to free more disk space
- Persistent hash table for rapid restart after shutdown
- Whole-filesystem dedupe - including snapshots
- Constant hash table size - no increased RAM usage if data set becomes larger
- Works on live data - no scheduled downtime required
- Automatic self-throttling based on system load
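The hash table sizing rule from the first bullet is a simple linear ratio, so it can be checked in a few lines. A sketch: the 0.1 GB/TB figure comes from the list above; the sample sizes are arbitrary.

    #include <cstdio>
    #include <initializer_list>

    int main()
    {
        const double gb_per_tb = 0.1;  // ~1 GB hash table per 10 TB unique data
        for (double unique_tb : {1.0, 4.0, 10.0, 40.0}) {
            printf("%5.1f TB unique data -> %4.1f GB hash table\n",
                   unique_tb, unique_tb * gb_per_tb);
        }
        return 0;
    }

Note that the ratio is against unique data, not raw filesystem capacity; because the table size is constant (see the bullets above and below), it is a tuning target chosen up front rather than something that grows with the filesystem.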
Weaknesses
- Whole-filesystem dedupe - has no include/exclude filters, does not accept file lists
- Requires root privilege (or CAP_SYS_ADMIN)
- First run may require temporary disk space for extent reorganization
- First run may increase metadata space usage if many snapshots exist
- Constant hash table size - no decreased RAM usage if data set becomes smaller
- btrfs only
Installation and Usage
Recommended Reading
- bees Gotchas
- btrfs kernel bugs - especially DATA CORRUPTION WARNING
- bees vs. other btrfs features
- What to do when something goes wrong
More Information
Bug Reports and Contributions
Email bug reports and patches to Zygo Blaxell <bees@furryterror.org>.
You can also use Github:
https://github.com/Zygo/bees
Copyright & License
Copyright 2015-2022 Zygo Blaxell <bees@furryterror.org>.
GPL (version 3 or later).