# BEES

Best-Effort Extent-Same, a btrfs deduplication agent.

## About bees
bees is a block-oriented userspace deduplication agent designed for large btrfs filesystems. It combines offline dedupe with an incremental data scan capability to minimize the time data spends on disk between write and dedupe.
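bees never copies or rewrites file contents itself; it finds duplicate blocks and asks the kernel to share the underlying extents. The sketch below is illustrative only (not code from bees) and shows the kernel primitive this relies on: the `FIDEDUPERANGE` ioctl from `linux/fs.h`, available since Linux 4.5 (older kernels expose the same operation as `BTRFS_IOC_FILE_EXTENT_SAME`). The kernel compares both ranges byte-for-byte before sharing anything, so a false hash match cannot corrupt data.

```cpp
#include <fcntl.h>
#include <linux/fs.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <vector>

// Ask the kernel to dedupe `length` bytes at src_off in src_path against
// dst_off in dst_path. The kernel verifies the ranges are identical
// before sharing extents, so this is safe to run on live data.
int dedupe_range(const char *src_path, uint64_t src_off,
                 const char *dst_path, uint64_t dst_off, uint64_t length)
{
    int src_fd = open(src_path, O_RDONLY);
    int dst_fd = open(dst_path, O_RDWR);
    if (src_fd < 0 || dst_fd < 0) { perror("open"); return -1; }

    // file_dedupe_range ends in a flexible array member, so allocate
    // room for one file_dedupe_range_info behind it (zero-initialized).
    std::vector<char> buf(sizeof(file_dedupe_range) +
                          sizeof(file_dedupe_range_info));
    auto *range = reinterpret_cast<file_dedupe_range *>(buf.data());
    range->src_offset = src_off;
    range->src_length = length;
    range->dest_count = 1;
    range->info[0].dest_fd = dst_fd;
    range->info[0].dest_offset = dst_off;

    int rc = ioctl(src_fd, FIDEDUPERANGE, range);
    if (rc < 0)
        perror("FIDEDUPERANGE");
    else if (range->info[0].status == FILE_DEDUPE_RANGE_SAME)
        printf("deduped %llu bytes\n",
               (unsigned long long)range->info[0].bytes_deduped);
    else
        printf("ranges differ; nothing deduped\n");
    close(src_fd);
    close(dst_fd);
    return rc;
}
```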
## Strengths
- Space-efficient hash table and matching algorithms - can use as little as 1 GB hash table per 10 TB unique data (0.1 GB/TB; a worked sizing example follows this list)
- Daemon incrementally dedupes new data using btrfs tree search (a sketch of the tree-search interface follows this list)
- Works with btrfs compression - dedupe any combination of compressed and uncompressed files
- Works around btrfs filesystem structure to free more disk space
- Persistent hash table for rapid restart after shutdown
- Whole-filesystem dedupe - including snapshots
- Constant hash table size - no increased RAM usage if data set becomes larger
- Works on live data - no scheduled downtime required
- Automatic self-throttling based on system load
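As a worked example of the hash table sizing rule above (1 GB per 10 TB of unique data): the arithmetic is a straight ratio. In this sketch the 25 TiB data set and the 16 MiB rounding granularity are assumptions chosen for illustration, not bees defaults; check the bees documentation for the actual sizing constraints.

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    const uint64_t TiB = 1ULL << 40;
    const uint64_t MiB = 1ULL << 20;

    // Assumed amount of unique (post-dedupe) data on the filesystem.
    const uint64_t unique_data = 25 * TiB;

    // ~1 GiB of hash table per 10 TiB of unique data: divide by 10240.
    uint64_t table = unique_data / 10240;

    // Round up to a 16 MiB multiple (illustrative granularity only).
    table = (table + 16 * MiB - 1) / (16 * MiB) * (16 * MiB);

    printf("hash table: %llu MiB for %llu TiB of unique data\n",
           (unsigned long long)(table / MiB),
           (unsigned long long)(unique_data / TiB));
    return 0; // prints: hash table: 2560 MiB for 25 TiB of unique data
}
```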
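The incremental scan works because btrfs can be asked for only those tree items newer than a given transaction id ("transid"), so unchanged data is never re-read. Below is a minimal sketch of that kernel interface, the `BTRFS_IOC_TREE_SEARCH_V2` ioctl; it shows the general mechanism, not bees's actual scanner. This ioctl is root-only, which is part of why bees needs `CAP_SYS_ADMIN` (see Weaknesses below).

```cpp
#include <linux/btrfs.h>
#include <linux/btrfs_tree.h>
#include <sys/ioctl.h>
#include <cstdint>
#include <cstdio>
#include <vector>

// Ask btrfs for file extent items created or changed after `last_transid`.
// fs_fd is any open descriptor on the filesystem (e.g. a directory
// opened O_RDONLY). Requires CAP_SYS_ADMIN.
int scan_new_extents(int fs_fd, uint64_t last_transid)
{
    const uint64_t buf_size = 64 * 1024;
    std::vector<char> buf(sizeof(btrfs_ioctl_search_args_v2) + buf_size);
    auto *args = reinterpret_cast<btrfs_ioctl_search_args_v2 *>(buf.data());

    args->key.tree_id = 0;                       // 0 = tree containing fs_fd
    args->key.min_objectid = 0;
    args->key.max_objectid = ~0ULL;
    args->key.min_offset = 0;
    args->key.max_offset = ~0ULL;
    args->key.min_transid = last_transid + 1;    // only newer generations
    args->key.max_transid = ~0ULL;
    args->key.min_type = BTRFS_EXTENT_DATA_KEY;  // file extent items only
    args->key.max_type = BTRFS_EXTENT_DATA_KEY;
    args->key.nr_items = 4096;                   // in: max items wanted
    args->buf_size = buf_size;

    if (ioctl(fs_fd, BTRFS_IOC_TREE_SEARCH_V2, args) < 0) {
        perror("BTRFS_IOC_TREE_SEARCH_V2");
        return -1;
    }
    // out: nr_items now holds the number of items copied into the buffer.
    printf("found %u new/changed extent items\n", args->key.nr_items);
    return 0;
}
```

Persisting the highest transid seen between runs is what lets a scanning daemon resume where it left off after a restart.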
## Weaknesses
- Whole-filesystem dedupe - has no include/exclude filters, does not accept file lists
- Requires root privilege (or `CAP_SYS_ADMIN`)
- First run may require temporary disk space for extent reorganization
- First run may increase metadata space usage if many snapshots exist
- Constant hash table size - no decreased RAM usage if data set becomes smaller
- btrfs only
## Installation and Usage
## Recommended Reading
- bees Gotchas
- btrfs kernel bugs - especially DATA CORRUPTION WARNING
- bees vs. other btrfs features
- What to do when something goes wrong
## More Information

## Bug Reports and Contributions
Email bug reports and patches to Zygo Blaxell <bees@furryterror.org>.

You can also use GitHub: https://github.com/Zygo/bees
## Copyright & License
Copyright 2015-2023 Zygo Blaxell <bees@furryterror.org>.

GPL (version 3 or later).