We are getting a lot of exceptions when an inline extent is too large for the TREE_SEARCH_V2 buffer. This disrupts ExtentWalker's extent boundary search when there is an inline extent at the beginning of a file:

    # fiemap foo
    Log 0x0..0x1000 Phy 0x0..0x1000 Flags FIEMAP_EXTENT_NOT_ALIGNED|FIEMAP_EXTENT_DATA_INLINE
    Log 0x1000..0x2000 Phy 0x7307f9000..0x7307fa000 Flags 0
    Log 0x2000..0x3000 Phy 0x731078000..0x731079000 Flags 0
    Log 0x3000..0x5000 Phy 0x73127d000..0x73127f000 Flags FIEMAP_EXTENT_ENCODED
    Log 0x5000..0x6000 Phy 0x73137a000..0x73137b000 Flags 0
    Log 0x6000..0x7000 Phy 0x731683000..0x731684000 Flags 0
    Log 0x7000..0x8000 Phy 0x73224f000..0x732250000 Flags 0
    Log 0x8000..0x9000 Phy 0x7323c9000..0x7323ca000 Flags 0
    Log 0x9000..0xb000 Phy 0x732425000..0x732427000 Flags FIEMAP_EXTENT_ENCODED
    Log 0xb000..0xc000 Phy 0x732598000..0x732599000 Flags 0
    Log 0xc000..0xd000 Phy 0x7325d5000..0x7325d6000 Flags FIEMAP_EXTENT_LAST

    # fiewalk foo
    exception type std::system_error: BTRFS_IOC_TREE_SEARCH_V2: /tmp/foo at fs.cc:844: Value too large for defined data type

Normally crawlers simply skip over inline extents, but ExtentWalker will seek backward from the first non-inline extent to confirm that it has an accurate starting block for the target extent. This fails when it encounters the first inline extent.

strace reveals that the buffer size is too small for the first extent, as seen here:

    ioctl(3, BTRFS_IOC_TREE_SEARCH_V2, {key={tree_id=258, min_objectid=78897856, max_objectid=UINT64_MAX, min_offset=0, max_offset=UINT64_MAX, min_transid=0, max_transid=UINT64_MAX, min_type=BTRFS_EXTENT_DATA_KEY, max_type=BTRFS_EXTENT_DATA_KEY, nr_items=16}, buf_size=1360} => {buf_size=1418}) = -1 EOVERFLOW (Value too large for defined data type)

Fix this by increasing the buffer size until it can handle the largest possible object on the largest possible btrfs metadata page (65536 bytes). BtrfsExtentWalker already has optimizations to minimize the allocation cost, so we don't need any changes there.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
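The shape of that fix can be sketched in a few lines. The following is a minimal illustration under stated assumptions, not the actual bees code: it assumes an open file descriptor on the filesystem, and the helper name `tree_search_v2` and the constant `MAX_BTRFS_NODESIZE` are hypothetical. The point is that sizing the buffer to the largest possible metadata page means no single item, including a large inline extent, can trigger EOVERFLOW.

```cpp
// Sketch only: one TREE_SEARCH_V2 call with a buffer big enough for
// anything btrfs can return in a single item.
#include <linux/btrfs.h>
#include <sys/ioctl.h>
#include <cerrno>
#include <cstddef>
#include <system_error>
#include <vector>

// Largest possible btrfs metadata page; no single item can exceed this.
static const size_t MAX_BTRFS_NODESIZE = 65536;

// Hypothetical helper: run one search, throw on failure, and return the
// raw buffer (search args header + items) for the caller to decode.
std::vector<char>
tree_search_v2(int fd, const btrfs_ioctl_search_key &key)
{
	std::vector<char> buf(sizeof(btrfs_ioctl_search_args_v2) + MAX_BTRFS_NODESIZE);
	auto *args = reinterpret_cast<btrfs_ioctl_search_args_v2 *>(buf.data());
	args->key = key;
	args->buf_size = MAX_BTRFS_NODESIZE;
	// With a 64 KiB buffer, a single oversized (e.g. inline) extent
	// item can no longer cause EOVERFLOW here.
	if (ioctl(fd, BTRFS_IOC_TREE_SEARCH_V2, args) < 0) {
		throw std::system_error(errno, std::system_category(),
		                        "BTRFS_IOC_TREE_SEARCH_V2");
	}
	return buf;
}
```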
BEES
Best-Effort Extent-Same, a btrfs deduplication agent.
About bees
bees is a block-oriented userspace deduplication agent designed for large btrfs filesystems. It combines offline dedupe with an incremental data scan capability to minimize the time data spends on disk between write and dedupe.
Strengths
- Space-efficient hash table and matching algorithms - can use as little as a 1 GB hash table per 10 TB of unique data (0.1 GB/TB; see the sizing sketch after this list)
- Daemon incrementally dedupes new data using btrfs tree search
- Works with btrfs compression - dedupe any combination of compressed and uncompressed files
- NEW: Works around `btrfs send` problems with dedupe and incremental parent snapshots
- Works around btrfs filesystem structure to free more disk space
- Persistent hash table for rapid restart after shutdown
- Whole-filesystem dedupe - including snapshots
- Constant hash table size - no increased RAM usage if data set becomes larger
- Works on live data - no scheduled downtime required
- Automatic self-throttling based on system load
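As a worked example of the sizing rule in the first item above (a rule of thumb, not an exact formula), a sketch like the following computes a suggested hash table size. The helper `suggested_hash_table_bytes` is hypothetical; bees itself simply uses whatever size its hash table file has been created with.

```cpp
// Worked example of the ~0.1 GB-per-TB hash table rule of thumb.
#include <cstdint>
#include <cstdio>

// Hypothetical helper: 1 GB per 10 TB of unique data is a 1:10000 ratio.
uint64_t suggested_hash_table_bytes(uint64_t unique_data_bytes)
{
	return unique_data_bytes / 10000;
}

int main()
{
	const uint64_t unique = 10ULL * 1000 * 1000 * 1000 * 1000; // 10 TB
	std::printf("10 TB of unique data -> %llu bytes (~1 GB)\n",
	            (unsigned long long)suggested_hash_table_bytes(unique));
	return 0;
}
```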
Weaknesses
- Whole-filesystem dedupe - has no include/exclude filters, does not accept file lists
- Requires root privilege (or CAP_SYS_ADMIN)
- First run may require temporary disk space for extent reorganization
- First run may increase metadata space usage if many snapshots exist
- Constant hash table size - no decreased RAM usage if data set becomes smaller
- btrfs only
Installation and Usage
Recommended Reading
- bees Gotchas
- btrfs kernel bugs - especially DATA CORRUPTION WARNING
- bees vs. other btrfs features
- What to do when something goes wrong
More Information
Bug Reports and Contributions
Email bug reports and patches to Zygo Blaxell <bees@furryterror.org>.
You can also use GitHub:
https://github.com/Zygo/bees
Copyright & License
Copyright 2015-2018 Zygo Blaxell <bees@furryterror.org>.
GPL (version 3 or later).