Mirror of https://github.com/Zygo/bees.git, synced 2025-06-15 17:26:15 +02:00
context: don't let multiple worker Tasks get stuck on a single extent or inode
When two Tasks attempt to lock the same extent, append the later Task to the earlier Task's post-exec work queue. This guarantees that all Tasks which attempt to manipulate the same extent execute sequentially, and frees up threads to process other extents.

Similarly, if two scanner threads operate on the same inode, any dedupe they perform will lock out other scanner threads in btrfs. Avoid this by serializing Task objects that reference the same file.

This does theoretically use an unbounded amount of memory, but in practice a Task that encounters a contended extent or inode quickly stops spawning new Tasks that might increase the queue size, and all Tasks that might contend for the same lock(s) end up on a single FIFO queue.

Note that the scope of inode locks is intentionally global, i.e. when an inode is locked, it locks every inode with the same number in every subvol. This avoids significant lock contention and task queue growth when the same inode with the same file extents appears in snapshots.

Fixes: https://github.com/Zygo/bees/issues/158
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
@@ -363,6 +363,8 @@ scanf
The `scanf` event group consists of operations related to `BeesContext::scan_forward`. This is the entry point where `crawl` schedules new data for scanning.
* `scanf_deferred_extent`: Two tasks attempted to scan the same extent at the same time, so one was deferred.
* `scanf_deferred_inode`: Two tasks attempted to scan the same inode at the same time, so one was deferred.
* `scanf_extent`: A btrfs extent item was scanned.
* `scanf_extent_ms`: Total thread-seconds spent scanning btrfs extent items.
* `scanf_total`: A logical byte range of a file was scanned.