Zygo Blaxell cbc76a7457 hash: don't spin when writes fail
When a hash table write fails, we skip over the write throttling because
we didn't report that we successfully wrote an extent.  This can be bad
if the filesystem is full and the allocations for writes are burning a
lot of CPU time searching for free space.

We also never retry the write later, because we mark the extent clean
after any write attempt, successful or not, so the extent may never be
written out even after writes become possible again.

Check whether a hash extent is dirty, and always throttle after
attempting the write.

If a write fails, leave the extent dirty so we attempt to write it out
the next time flush cycles through the hash table.  During shutdown
this will reattempt each failing write once; after that, the updated
hash table data is dropped.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2023-01-23 00:09:26 -05:00

BEES

Best-Effort Extent-Same, a btrfs deduplication agent.

About bees

bees is a block-oriented userspace deduplication agent designed for large btrfs filesystems. It combines an out-of-band (offline) dedupe engine with incremental data scanning to minimize the time between data being written and being deduplicated.

Strengths

  • Space-efficient hash table and matching algorithms - can use as little as 1 GB hash table per 10 TB unique data (0.1 GB/TB)
  • Daemon incrementally dedupes new data using btrfs tree search
  • Works with btrfs compression - dedupe any combination of compressed and uncompressed files
  • NEW Works around btrfs send problems with dedupe and incremental parent snapshots
  • Works around btrfs filesystem structure to free more disk space
  • Persistent hash table for rapid restart after shutdown
  • Whole-filesystem dedupe - including snapshots
  • Constant hash table size - no increased RAM usage if data set becomes larger
  • Works on live data - no scheduled downtime required
  • Automatic self-throttling based on system load

Weaknesses

  • Whole-filesystem dedupe - has no include/exclude filters, does not accept file lists
  • Requires root privilege (or CAP_SYS_ADMIN)
  • First run may require temporary disk space for extent reorganization
  • First run may increase metadata space usage if many snapshots exist
  • Constant hash table size - no decreased RAM usage if data set becomes smaller
  • btrfs only

Installation and Usage

More Information

Bug Reports and Contributions

Email bug reports and patches to Zygo Blaxell bees@furryterror.org.

You can also use Github:

    https://github.com/Zygo/bees

Copyright 2015-2022 Zygo Blaxell bees@furryterror.org.

GPL (version 3 or later).
