Mirror of https://github.com/Zygo/bees.git
roots: separate crawl sizes into bytes and items
Number of items should be low enough that we don't accumulate too many stale items, but high enough to amortize system call overhead to a reasonable ratio.

Number of bytes should be constant: one worst-case metadata page (the btrfs limit is 64K, though 16K is much more common), so that we always have enough space for one worst-case item. Otherwise, if we set the number of items too low and there's a big item in the tree, we get EOVERFLOW and can't make further progress.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
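For reference, a minimal sketch of how the two new limits might be declared (the names come from the diff below; the 64K byte value follows the worst-case metadata page size cited above, while the item count is an illustrative placeholder, not the commit's actual choice):

	// Byte cap: sized for one worst-case btrfs metadata page (64K), so
	// the search buffer can always hold at least one item and a single
	// large item never triggers EOVERFLOW.
	const size_t BEES_MAX_CRAWL_BYTES = 64 * 1024;

	// Item cap: low enough to limit stale items between crawls, high
	// enough to amortize the cost of each TREE_SEARCH_V2 ioctl.
	// (Placeholder value -- the commit's actual constant is not shown here.)
	const size_t BEES_MAX_CRAWL_ITEMS = 1000;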
@@ -995,7 +995,7 @@ BeesCrawl::fetch_extents()
 
 	Timer crawl_timer;
 
-	BtrfsIoctlSearchKey sk(BEES_MAX_CRAWL_SIZE * (sizeof(btrfs_file_extent_item) + sizeof(btrfs_ioctl_search_header)));
+	BtrfsIoctlSearchKey sk(BEES_MAX_CRAWL_BYTES);
 	sk.tree_id = old_state.m_root;
 	sk.min_objectid = old_state.m_objectid;
 	sk.min_type = sk.max_type = BTRFS_EXTENT_DATA_KEY;
@@ -1006,7 +1006,7 @@ BeesCrawl::fetch_extents()
 	// the filesystem while slowing us down.
 	// sk.max_transid = old_state.m_max_transid;
 	sk.max_transid = numeric_limits<uint64_t>::max();
-	sk.nr_items = BEES_MAX_CRAWL_SIZE;
+	sk.nr_items = BEES_MAX_CRAWL_ITEMS;
 
 	// Lock in the old state
 	set_state(old_state);
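Why the byte cap matters independently of the item cap: TREE_SEARCH_V2 copies items into a caller-supplied buffer and, as the commit message notes, fails with EOVERFLOW when even a single item is too big for that buffer. A minimal standalone sketch of the raw ioctl with a 64K buffer (this bypasses bees' BtrfsIoctlSearchKey wrapper; the subvolume choice and item cap are illustrative assumptions):

	#include <cstdio>
	#include <cstdlib>
	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/btrfs.h>	// btrfs_ioctl_search_args_v2, BTRFS_IOC_TREE_SEARCH_V2
	#include <linux/btrfs_tree.h>	// BTRFS_EXTENT_DATA_KEY, BTRFS_FS_TREE_OBJECTID

	int main(int argc, char **argv)
	{
		if (argc < 2) { fprintf(stderr, "usage: %s <btrfs-path>\n", argv[0]); return 1; }
		int fd = open(argv[1], O_RDONLY);
		if (fd < 0) { perror("open"); return 1; }

		// Allocate the args struct plus a 64K payload buffer: one
		// worst-case btrfs metadata page, so any single item fits.
		const size_t buf_size = 64 * 1024;
		auto *args = static_cast<btrfs_ioctl_search_args_v2 *>(
			calloc(1, sizeof(btrfs_ioctl_search_args_v2) + buf_size));
		args->buf_size = buf_size;

		auto &sk = args->key;	// mins are already zero from calloc
		sk.tree_id = BTRFS_FS_TREE_OBJECTID;	// e.g. crawl the top-level subvol
		sk.min_type = sk.max_type = BTRFS_EXTENT_DATA_KEY;
		sk.max_objectid = ~0ULL;
		sk.max_offset = ~0ULL;
		sk.max_transid = ~0ULL;
		sk.nr_items = 1000;	// item cap; the byte cap above is what keeps EOVERFLOW away

		if (ioctl(fd, BTRFS_IOC_TREE_SEARCH_V2, args) < 0) {
			perror("BTRFS_IOC_TREE_SEARCH_V2");	// EOVERFLOW only if the buffer can't hold one item
			free(args);
			close(fd);
			return 1;
		}
		printf("fetched %u items\n", sk.nr_items);	// kernel rewrites nr_items to the count returned
		free(args);
		close(fd);
		return 0;
	}

With a fixed 64K buffer, nr_items is purely a throughput knob: the kernel stops at whichever limit is hit first, and the byte limit alone guarantees forward progress past any single large item.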