From da3ef216b16a7103fcd84237ad7bbeb82d4c4faf Mon Sep 17 00:00:00 2001
From: Zygo Blaxell <bees@furryterror.org>
Date: Tue, 7 Mar 2023 10:20:16 -0500
Subject: [PATCH] docs: working around `btrfs send` issues isn't really a
 feature

The critical kernel bugs in send have been fixed for years.  The
limitations that remain aren't bugs, and bees has no sustainable
workaround for them.

Also update copyright year range.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
---
 README.md     |  3 +--
 docs/index.md | 20 +++++++++-----------
 2 files changed, 10 insertions(+), 13 deletions(-)

diff --git a/README.md b/README.md
index 9b9783b..a82cdce 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,6 @@ Strengths
  * Space-efficient hash table and matching algorithms - can use as little as 1 GB hash table per 10 TB unique data (0.1GB/TB)
  * Daemon incrementally dedupes new data using btrfs tree search
  * Works with btrfs compression - dedupe any combination of compressed and uncompressed files
- * **NEW** [Works around `btrfs send` problems with dedupe and incremental parent snapshots](docs/options.md)
  * Works around btrfs filesystem structure to free more disk space
  * Persistent hash table for rapid restart after shutdown
  * Whole-filesystem dedupe - including snapshots
@@ -70,6 +69,6 @@ You can also use Github:
 Copyright & License
 -------------------
 
-Copyright 2015-2022 Zygo Blaxell <bees@furryterror.org>.
+Copyright 2015-2023 Zygo Blaxell <bees@furryterror.org>.
 
 GPL (version 3 or later).
diff --git a/docs/index.md b/docs/index.md
index c607db7..4ce7579 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -6,11 +6,10 @@ Best-Effort Extent-Same, a btrfs deduplication agent.
 About bees
 ----------
 
-bees is a block-oriented userspace deduplication agent designed to scale
-up to large btrfs filesystems. It is a daemon that performs offline
-dedupe automatically as required. It uses an incremental data scan
-capability to minimize memory usage and dedupe new data soon after it
-appears in the filesystem.
+bees is a block-oriented userspace deduplication agent designed for large
+btrfs filesystems. It is an offline dedupe combined with an incremental
+data scan capability to minimize time data spends on disk from write
+to dedupe.
 
 Strengths
 ---------
@@ -18,23 +17,22 @@ Strengths
  * Space-efficient hash table and matching algorithms - can use as little as 1 GB hash table per 10 TB unique data (0.1GB/TB)
  * Daemon incrementally dedupes new data using btrfs tree search
  * Works with btrfs compression - dedupe any combination of compressed and uncompressed files
- * Works around btrfs filesystem structure issues to free more disk space than generic dedupe tools
- * Persistent hash table and checkpoint for rapid restart after shutdown
+ * Works around btrfs filesystem structure to free more disk space
+ * Persistent hash table for rapid restart after shutdown
  * Whole-filesystem dedupe - including snapshots
  * Constant hash table size - no increased RAM usage if data set becomes larger
  * Works on live data - no scheduled downtime required
  * Automatic self-throttling based on system load
- * Low memory footprint (excluding the hash table)
 
 Weaknesses
 ----------
 
- * Whole-filesystem dedupe - has no include/exclude filters, does not accept file lists, terminates only when explicitly requested
- * Requires root privilege (or `CAP_SYS_ADMIN`) to work
+ * Whole-filesystem dedupe - has no include/exclude filters, does not accept file lists
+ * Requires root privilege (or `CAP_SYS_ADMIN`)
  * First run may require temporary disk space for extent reorganization
  * [First run may increase metadata space usage if many snapshots exist](gotchas.md)
  * Constant hash table size - no decreased RAM usage if data set becomes smaller
- * btrfs only (bcachefs and xfs are missing various features)
+ * btrfs only
 
 Installation and Usage
 ----------------------
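
Aside: the "Extent-Same" in the project's name refers to the Linux
FIDEDUPERANGE ioctl (lifted into the VFS from the older
BTRFS_IOC_FILE_EXTENT_SAME), the kernel interface that dedupe agents
such as bees use to share identical extents safely. Below is a minimal
standalone sketch of that interface, not bees code: the file name
dedupe_example.c, the command-line arguments, and the 128 KiB range
length are all illustrative assumptions.

    /* dedupe_example.c - minimal FIDEDUPERANGE sketch (not bees code).
     * Asks the kernel to compare the first 128 KiB of SRC and DST and,
     * if the bytes are identical, make both files share one extent.
     */
    #include <fcntl.h>
    #include <linux/fs.h>      /* FIDEDUPERANGE, struct file_dedupe_range */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s SRC DST\n", argv[0]);
            return 1;
        }
        int src = open(argv[1], O_RDONLY);
        int dst = open(argv[2], O_RDWR);
        if (src < 0 || dst < 0) {
            perror("open");
            return 1;
        }

        /* Variable-length argument: header plus one destination record. */
        struct file_dedupe_range *range =
            calloc(1, sizeof(*range) + sizeof(struct file_dedupe_range_info));
        if (!range) {
            perror("calloc");
            return 1;
        }
        range->src_offset = 0;
        range->src_length = 128 * 1024;   /* arbitrary example length */
        range->dest_count = 1;
        range->info[0].dest_fd = dst;
        range->info[0].dest_offset = 0;

        /* The ioctl is issued on the source fd; the kernel verifies that
         * the ranges are byte-identical before sharing extents, which is
         * why this kind of dedupe is safe on live data. */
        if (ioctl(src, FIDEDUPERANGE, range) < 0) {
            perror("FIDEDUPERANGE");
            return 1;
        }
        if (range->info[0].status == FILE_DEDUPE_RANGE_SAME)
            printf("deduped %llu bytes\n",
                   (unsigned long long)range->info[0].bytes_deduped);
        else
            printf("not deduped (status %d)\n", range->info[0].status);
        free(range);
        return 0;
    }

This one-shot call is only the final step of what the daemon does: the
hash table, btrfs tree search, and incremental scanning described above
exist to decide which pairs of ranges are worth passing to this ioctl.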