mirror of https://github.com/Zygo/bees.git synced 2025-05-17 21:35:45 +02:00
Zygo Blaxell 373b9ef038 roots: fix subvol scan rollover on subvols with empty transid range
The ordering function for BeesCrawlState did not consider

	root 292 inode 0 min_transid 2345 max_transid 3456

to be larger than

	root 292 inode 258 min_transid 2345 max_transid 2345

so when we attempted to update the end pointer for the crawl progress,
the new state was not considered newer than the old state because the
min_transid was equal, but the new crawl state's inode number was smaller.

Normally this is not a problem because subvol scans typically begin
and end in separate transactions (in part because we don't start a
subvol scan until at least two transactions are available); however,
the cleanup code for the aftermath of the recent transid_min() bug can
create crawlers with equal max_transid and min_transid records.

Fix this by ordering both transid fields before any others in the
crawl state.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-10-30 21:12:14 -04:00

BEES

Best-Effort Extent-Same, a btrfs deduplication agent.

About bees

bees is a block-oriented userspace deduplication agent designed for large btrfs filesystems. It combines offline dedupe with an incremental data scan capability to minimize the time data spends on disk between write and dedupe.

Strengths

  • Space-efficient hash table and matching algorithms - can use as little as 1 GB hash table per 10 TB unique data (0.1 GB/TB)
  • Incremental realtime dedupe of new data using btrfs tree search
  • Works with btrfs compression - dedupe any combination of compressed and uncompressed files
  • Works around btrfs filesystem structure to free more disk space
  • Persistent hash table for rapid restart after shutdown
  • Whole-filesystem dedupe - including snapshots
  • Constant hash table size - no increased RAM usage if data set becomes larger
  • Works on live data - no scheduled downtime required
  • Automatic self-throttling based on system load

Weaknesses

  • Whole-filesystem dedupe - has no include/exclude filters, does not accept file lists
  • Runs continuously as a daemon - no quick start/stop
  • Requires root privilege (or CAP_SYS_ADMIN)
  • First run may require temporary disk space for extent reorganization
  • First run may increase metadata space usage if many snapshots exist
  • Constant hash table size - no decreased RAM usage if data set becomes smaller
  • btrfs only

Installation and Usage

More Information

Bug Reports and Contributions

Email bug reports and patches to Zygo Blaxell <bees@furryterror.org>.

You can also use Github:

    https://github.com/Zygo/bees

Copyright 2015-2018 Zygo Blaxell <bees@furryterror.org>.

GPL (version 3 or later).
