Mirror of https://github.com/Zygo/bees.git, synced 2025-05-17 13:25:45 +02:00
Pool is a place to store shared_ptrs to generated objects (T) that are too expensive to create and destroy between individual uses, such as temporary files. Objects in a Pool have no distinct identity (contrast with Cache or NamedPtr).

Users of the Pool invoke the Pool function call overload to "check out" a shared_ptr<T> for a T object from the Pool. When the last referencing shared_ptr<T> is destroyed, the T object is "checked in" to the Pool. Each call of the Pool function call overload checks out a shared_ptr<T> to a T object that is not currently referenced by any other public shared_ptr<T>. If there are no existing T objects in the Pool, a new T is constructed by calling the generator function.

The clear() method destroys all checked-in T objects owned by the Pool at the time the method is called. T objects that are checked out are not affected by clear(), and they will be stored in the Pool when they are checked in.

If the checkout function is provided, it is called on a shared_ptr<T> during checkout, before the shared_ptr<T> is returned to the caller. If the checkin function is provided, it is called on a shared_ptr<T> before returning it to the Pool. The checkin function must not throw exceptions.

The Pool may be destroyed while T objects are checked out of it. In that case, when such a T object is checked in, it is immediately destroyed without calling the checkin function.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
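The semantics above can be sketched roughly as follows. This is a minimal illustration of the described behavior, not bees' actual Pool implementation; the internal `State` struct and member names are assumptions for the sketch. The key trick is a custom deleter on the checked-out shared_ptr that returns the object to the Pool, via a weak_ptr so a Pool destroyed mid-checkout is not resurrected.

```cpp
#include <functional>
#include <list>
#include <memory>
#include <mutex>

// Minimal sketch of a Pool as described above (not the real implementation).
template <class T>
class Pool {
    // Shared state outlives neither the Pool nor any checked-out deleter
    // that manages to lock the weak_ptr below.
    struct State {
        std::mutex mutex;
        std::list<std::shared_ptr<T>> idle;  // checked-in T objects
        std::function<std::shared_ptr<T>()> generator;
        std::function<void(const std::shared_ptr<T> &)> checkout_fn;
        std::function<void(const std::shared_ptr<T> &)> checkin_fn;
    };
    std::shared_ptr<State> m_state = std::make_shared<State>();

public:
    Pool(std::function<std::shared_ptr<T>()> generator,
         std::function<void(const std::shared_ptr<T> &)> checkout_fn = {},
         std::function<void(const std::shared_ptr<T> &)> checkin_fn = {})
    {
        m_state->generator = std::move(generator);
        m_state->checkout_fn = std::move(checkout_fn);
        m_state->checkin_fn = std::move(checkin_fn);
    }

    // Check out a T: reuse an idle object if one exists, otherwise call
    // the generator. The returned shared_ptr's deleter checks the object
    // back in when the last public reference is destroyed.
    std::shared_ptr<T> operator()() {
        std::shared_ptr<T> obj;
        {
            std::unique_lock<std::mutex> lock(m_state->mutex);
            if (!m_state->idle.empty()) {
                obj = m_state->idle.front();
                m_state->idle.pop_front();
            }
        }
        if (!obj) obj = m_state->generator();
        if (m_state->checkout_fn) m_state->checkout_fn(obj);
        // Hold only a weak reference to the Pool's state: if the Pool is
        // destroyed while this object is checked out, the object is simply
        // destroyed at check-in without calling the checkin function.
        std::weak_ptr<State> weak_state = m_state;
        return std::shared_ptr<T>(obj.get(), [obj, weak_state](T *) mutable {
            if (auto state = weak_state.lock()) {
                if (state->checkin_fn) state->checkin_fn(obj);  // must not throw
                std::unique_lock<std::mutex> lock(state->mutex);
                state->idle.push_back(std::move(obj));
            }
            // else: Pool already gone; obj's last reference dies here.
        });
    }

    // Destroy all currently checked-in T objects. Checked-out objects are
    // unaffected and return to the Pool normally.
    void clear() {
        std::unique_lock<std::mutex> lock(m_state->mutex);
        m_state->idle.clear();
    }
};
```

A caller would construct `Pool<TmpFile> pool(make_tmpfile)` once and write `auto f = pool();` wherever a temporary file is needed; the file is recycled automatically when `f` goes out of scope.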
BEES
Best-Effort Extent-Same, a btrfs deduplication agent.
About bees
bees is a block-oriented userspace deduplication agent designed for large btrfs filesystems. It is an offline dedupe combined with an incremental data scan capability to minimize time data spends on disk from write to dedupe.
Strengths
- Space-efficient hash table and matching algorithms - can use as little as 1 GB of hash table per 10 TB of unique data (0.1 GB/TB)
- Daemon incrementally dedupes new data using btrfs tree search
- Works with btrfs compression - dedupe any combination of compressed and uncompressed files
- NEW: Works around btrfs send problems with dedupe and incremental parent snapshots
- Works around btrfs filesystem structure to free more disk space
- Persistent hash table for rapid restart after shutdown
- Whole-filesystem dedupe - including snapshots
- Constant hash table size - no increased RAM usage if data set becomes larger
- Works on live data - no scheduled downtime required
- Automatic self-throttling based on system load
Weaknesses
- Whole-filesystem dedupe - has no include/exclude filters, does not accept file lists
- Requires root privilege (or CAP_SYS_ADMIN)
- First run may require temporary disk space for extent reorganization
- First run may increase metadata space usage if many snapshots exist
- Constant hash table size - no decreased RAM usage if data set becomes smaller
- btrfs only
Installation and Usage
Recommended Reading
- bees Gotchas
- btrfs kernel bugs - especially DATA CORRUPTION WARNING
- bees vs. other btrfs features
- What to do when something goes wrong
More Information
Bug Reports and Contributions
Email bug reports and patches to Zygo Blaxell bees@furryterror.org.
You can also use Github:
https://github.com/Zygo/bees
Copyright & License
Copyright 2015-2018 Zygo Blaxell bees@furryterror.org.
GPL (version 3 or later).