Mirror of https://github.com/Zygo/bees.git (synced 2025-05-17)
BEES
Best-Effort Extent-Same, a btrfs deduplication agent.
About bees
bees is a block-oriented userspace deduplication agent designed for large btrfs filesystems. It combines offline dedupe with an incremental data scan capability to minimize the time data spends on disk between write and dedupe.
Strengths
- Space-efficient hash table and matching algorithms - can use as little as 1 GB hash table per 10 TB unique data (0.1GB/TB)
- Daemon incrementally dedupes new data using btrfs tree search
- Works with btrfs compression - dedupe any combination of compressed and uncompressed files
- Works around btrfs send problems with dedupe and incremental parent snapshots
- Works around btrfs filesystem structure to free more disk space
- Persistent hash table for rapid restart after shutdown
- Whole-filesystem dedupe - including snapshots
- Constant hash table size - no increased RAM usage if data set becomes larger
- Works on live data - no scheduled downtime required
- Automatic self-throttling based on system load
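As a back-of-the-envelope illustration of the 0.1 GB/TB ratio above, the sizing arithmetic can be sketched as follows. This is only an illustration of the quoted guideline, not an official bees formula, and the variable names are hypothetical, not bees configuration options:

```shell
# Rough hash-table sizing from the "1 GB per 10 TB unique data" guideline.
unique_tb=40                          # assumed: estimated unique data, in TB
hash_mb=$(( unique_tb * 1024 / 10 ))  # 1 GB per 10 TB is ~102 MB per TB
echo "suggested hash table: ${hash_mb} MB"
```

For 40 TB of unique data this suggests roughly a 4 GB hash table; a larger table improves match rates on filesystems with more unique data, at the cost of RAM.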
Weaknesses
- Whole-filesystem dedupe - has no include/exclude filters, does not accept file lists
- Requires root privilege (or CAP_SYS_ADMIN)
- First run may require temporary disk space for extent reorganization
- First run may increase metadata space usage if many snapshots exist
- Constant hash table size - no decreased RAM usage if data set becomes smaller
- btrfs only
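On systemd systems, the root/CAP_SYS_ADMIN requirement can in principle be narrowed by granting only that capability to a dedicated service user rather than running as full root. The fragment below is a hypothetical sketch using standard systemd directives; the unit layout and the `bees` user name are assumptions, not something bees ships in this form:

```ini
# Hypothetical systemd service fragment: run as an unprivileged user,
# granting only CAP_SYS_ADMIN instead of full root.
[Service]
User=bees
AmbientCapabilities=CAP_SYS_ADMIN
CapabilityBoundingSet=CAP_SYS_ADMIN
```

Note that the service user still needs filesystem-level access to the btrfs mount and to the hash table location for this to work.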
Installation and Usage
Recommended Reading
- bees Gotchas
- btrfs kernel bugs - especially DATA CORRUPTION WARNING
- bees vs. other btrfs features
- What to do when something goes wrong
More Information
Bug Reports and Contributions
Email bug reports and patches to Zygo Blaxell bees@furryterror.org.
You can also use Github:
https://github.com/Zygo/bees
Copyright & License
Copyright 2015-2018 Zygo Blaxell bees@furryterror.org.
GPL (version 3 or later).