Mirror of https://github.com/Zygo/bees.git (synced 2025-05-17 13:25:45 +02:00)
docs: update README.md

Emphasize "large" is an upper bound on the size of filesystem bees can handle.

New strengths: largest extent first for fixed maintenance windows, scans data only once (ish), recovers more space.

Removed weaknesses: less temporary space.

Need more caps than `CAP_SYS_ADMIN`.

Emphasize DATA CORRUPTION WARNING is an old-kernel thing.

Update copyright year.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
parent 0d251d30f4
commit 46815f1a9d

README.md (26 lines changed)
README.md (26 lines changed)
```diff
@@ -6,30 +6,30 @@ Best-Effort Extent-Same, a btrfs deduplication agent.
 About bees
 ----------
 
-bees is a block-oriented userspace deduplication agent designed for large
-btrfs filesystems. It is an offline dedupe combined with an incremental
-data scan capability to minimize time data spends on disk from write
-to dedupe.
+bees is a block-oriented userspace deduplication agent designed to scale
+up to large btrfs filesystems. It is an offline dedupe combined with
+an incremental data scan capability to minimize time data spends on disk
+from write to dedupe.
 
 Strengths
 ---------
 
-* Space-efficient hash table and matching algorithms - can use as little as 1 GB hash table per 10 TB unique data (0.1GB/TB)
-* Daemon incrementally dedupes new data using btrfs tree search
+* Space-efficient hash table - can use as little as 1 GB hash table per 10 TB unique data (0.1GB/TB)
+* Daemon mode - incrementally dedupes new data as it appears
+* Largest extents first - recover more free space during fixed maintenance windows
 * Works with btrfs compression - dedupe any combination of compressed and uncompressed files
-* Works around btrfs filesystem structure to free more disk space
+* Whole-filesystem dedupe - scans data only once, even with snapshots and reflinks
 * Persistent hash table for rapid restart after shutdown
-* Whole-filesystem dedupe - including snapshots
 * Constant hash table size - no increased RAM usage if data set becomes larger
 * Works on live data - no scheduled downtime required
-* Automatic self-throttling based on system load
+* Automatic self-throttling - reduces system load
+* btrfs support - recovers more free space from btrfs than naive dedupers
 
 Weaknesses
 ----------
 
 * Whole-filesystem dedupe - has no include/exclude filters, does not accept file lists
-* Requires root privilege (or `CAP_SYS_ADMIN`)
-* First run may require temporary disk space for extent reorganization
+* Requires root privilege (`CAP_SYS_ADMIN` plus the usual filesystem read/modify caps)
 * [First run may increase metadata space usage if many snapshots exist](docs/gotchas.md)
 * Constant hash table size - no decreased RAM usage if data set becomes smaller
 * btrfs only
@@ -46,7 +46,7 @@ Recommended Reading
 -------------------
 
 * [bees Gotchas](docs/gotchas.md)
-* [btrfs kernel bugs](docs/btrfs-kernel.md) - especially DATA CORRUPTION WARNING
+* [btrfs kernel bugs](docs/btrfs-kernel.md) - especially DATA CORRUPTION WARNING for old kernels
 * [bees vs. other btrfs features](docs/btrfs-other.md)
 * [What to do when something goes wrong](docs/wrong.md)
 
@@ -69,6 +69,6 @@ You can also use Github:
 Copyright & License
 -------------------
 
-Copyright 2015-2023 Zygo Blaxell <bees@furryterror.org>.
+Copyright 2015-2025 Zygo Blaxell <bees@furryterror.org>.
 
 GPL (version 3 or later).
```
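The strengths list in the diff above quotes a hash-table sizing ratio: as little as 1 GB of hash table per 10 TB of unique data (0.1 GB/TB). A minimal sketch of that arithmetic for picking a hash table size; the function name and its default ratio are mine for illustration, not part of bees:

```python
# Ballpark bees hash-table sizing from the ratio quoted in the README:
# roughly 0.1 GB of hash table per TB of unique data.
def hash_table_size_gb(unique_data_tb: float, gb_per_tb: float = 0.1) -> float:
    """Return an approximate hash table size in GB for the given unique data."""
    return unique_data_tb * gb_per_tb

# 10 TB of unique data -> about 1 GB of hash table
print(hash_table_size_gb(10))
# 50 TB of unique data -> about 5 GB of hash table
print(hash_table_size_gb(50))
```

The ratio is an upper-bound rule of thumb from the README text ("as little as"); actual effectiveness at a given size depends on the data.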
```diff
@@ -6,30 +6,30 @@ Best-Effort Extent-Same, a btrfs deduplication agent.
 About bees
 ----------
 
-bees is a block-oriented userspace deduplication agent designed for large
-btrfs filesystems. It is an offline dedupe combined with an incremental
-data scan capability to minimize time data spends on disk from write
-to dedupe.
+bees is a block-oriented userspace deduplication agent designed to scale
+up to large btrfs filesystems. It is an offline dedupe combined with
+an incremental data scan capability to minimize time data spends on disk
+from write to dedupe.
 
 Strengths
 ---------
 
-* Space-efficient hash table and matching algorithms - can use as little as 1 GB hash table per 10 TB unique data (0.1GB/TB)
-* Daemon incrementally dedupes new data using btrfs tree search
+* Space-efficient hash table - can use as little as 1 GB hash table per 10 TB unique data (0.1GB/TB)
+* Daemon mode - incrementally dedupes new data as it appears
+* Largest extents first - recover more free space during fixed maintenance windows
 * Works with btrfs compression - dedupe any combination of compressed and uncompressed files
-* Works around btrfs filesystem structure to free more disk space
+* Whole-filesystem dedupe - scans data only once, even with snapshots and reflinks
 * Persistent hash table for rapid restart after shutdown
-* Whole-filesystem dedupe - including snapshots
 * Constant hash table size - no increased RAM usage if data set becomes larger
 * Works on live data - no scheduled downtime required
-* Automatic self-throttling based on system load
+* Automatic self-throttling - reduces system load
+* btrfs support - recovers more free space from btrfs than naive dedupers
 
 Weaknesses
 ----------
 
 * Whole-filesystem dedupe - has no include/exclude filters, does not accept file lists
-* Requires root privilege (or `CAP_SYS_ADMIN`)
-* First run may require temporary disk space for extent reorganization
+* Requires root privilege (`CAP_SYS_ADMIN` plus the usual filesystem read/modify caps)
 * [First run may increase metadata space usage if many snapshots exist](gotchas.md)
 * Constant hash table size - no decreased RAM usage if data set becomes smaller
 * btrfs only
@@ -46,7 +46,7 @@ Recommended Reading
 -------------------
 
 * [bees Gotchas](gotchas.md)
-* [btrfs kernel bugs](btrfs-kernel.md) - especially DATA CORRUPTION WARNING
+* [btrfs kernel bugs](btrfs-kernel.md) - especially DATA CORRUPTION WARNING for old kernels
 * [bees vs. other btrfs features](btrfs-other.md)
 * [What to do when something goes wrong](wrong.md)
 
@@ -69,6 +69,6 @@ You can also use Github:
 Copyright & License
 -------------------
 
-Copyright 2015-2023 Zygo Blaxell <bees@furryterror.org>.
+Copyright 2015-2025 Zygo Blaxell <bees@furryterror.org>.
 
 GPL (version 3 or later).
```