This obviously doesn't fix or prevent the kernel bug, but it does prevent
bees from triggering the bug without assistance from another application.
The bug can still be triggered by running bees at the same time as an
application which uses clone or LOGICAL_INO. `btdu` uses LOGICAL_INO,
while `cp` from coreutils (and many others) use clone (reflink copy).
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
In commit 31b2aa3c0dcf042559fa495b63a1bf22bac4d55d ("context: speed
up orderly process termination"), the stop request was split into two
methods after the mutex unlock.
Now that there's nothing after the mutex unlock in `stop_request`,
there's no need for an explicit unlock to do what the destructor would
have done anyway.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Parallel scan runs each extent size tier in a separate thread. The
threads compete to process extents within the tier's size range.
Ordered scan processes each extent size tier completely before moving on
to the next. In theory, this means large extents always get processed
quickly, especially when new ones appear, and the queue does not fill up
with small extents.
In practice, the multi-threaded scanner massively outperforms the
single-threaded scanner, unless the number of worker threads is very
small (i.e. one).
Disable most of the feature for now, but leave the code in place so it
can be easily reactivated for future testing.
Ordered scan introduces a parallelized extent mapper Task. Keep that in
parallel scan mode, which further enhances the parallelism. The extent
scan crawl threads now run at 'idle' priority while the map tasks run
at normal priority, so the map tasks don't flood the task queue.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
BeesScanModeExtent can do that by itself now. Overloading the subvol
crawl code resulted in an ugly, inefficient hack, and we definitely
don't want to accidentally continue to use it.
Remove the support for reading the extent tree and add some `assert`s
to make sure it isn't still used somewhere.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
The main gains here are:
* Move extent tree searches into BeesScanModeExtent so that they are
not slowed down by the BeesCrawl code, which was designed for the
much more specialized metadata in subvol trees.
* Enable short extent skipping now that BeesCrawl is out of the way.
* Stop enumerating btrfs subvols when in extent scan mode.
All this gets rid of >99% of unnecessary extent tree searches.
Incremental extent scan cycles now finish in milliseconds instead
of minutes.
BeesCrawl was never designed to cope with the structure and content of
the extent tree. It would waste thousands of tree-search ioctl calls
reading and ignoring metadata items.
Performance was particularly bad when a binary search was involved, as any
binary search probe that landed in a metadata block group would read and
discard all the metadata items in the block group, sequentially, repeated
for each level of the binary search. This was blocking implementation of the
short extent skipping optimization for large extent size tiers, because
the skips were using thousands of tree searches to skip over only a few
hundred extent items.
Extent scan also had to read every extent item twice to do the
transid filtering, because BeesCrawl's interface discarded the relevant
information when it converted a `BtrfsTreeItem` into a `BeesFileRange`.
The cost of this extra fetch was negligible, but it could have been zero.
Fix this by:
* Copy the equivalent of `fetch_extents` from BeesCrawl into
`BeesScanModeExtent`, then give each of the extent scan crawlers its
own `BtrfsDataExtentTreeFetcher` instance. This enables extent tree
searches to avoid pure (non-mixed) metadata block groups. `BeesCrawl`
is now used only for its interface to `BeesRoots` for saving state in
`beescrawl.dat`, and never to determine the next extent tree item.
* Move subvol-specific parts of `BeesRoots` into a new class
`BeesScanModeSubvol` so that `BeesScanModeExtent` doesn't have to enable
or support them. In particular, `bees -m4` no longer enumerates all
of the _subvol_ crawlers. `BeesRoots` is still used to save and load
crawl state.
* Move several members from `BeesScanModeExtent` into a per-crawler
state object `SizeTier` to eliminate the need for some locks and to
maintain separate cache state for `BtrfsDataExtentTreeFetcher`.
* Reuse the `BtrfsTreeItem` to get the generation field for the transid
range filter.
* Avoid a few corner cases when handling errors, where extent scan might
drop an extent without scanning it, or fail to advance to the next extent.
* Enable the extent-skipping algorithm for large size tiers, now that
`BeesCrawl::fetch_extents` is no longer slowing it down.
* Add a debug stream interface which developers can easily turn on when
needed to inspect the decisions that extent scan is making.
* Track metrics that are more useful, particularly searches per extent
scanned, and fraction of extents that are skipped.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
This gets rid of one open-coded btrfs tree search.
Also reduce the log noise level for subvol open failures, and remove
some ancient references to `BEESLOG`.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Rearrange the logic in `rlower_bound` so it can cope with a tree
that contains mostly block-aligned objects, with a few exceptions
filtered out by `hdr_stop`.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
In some cases functions already had existing debug stream support
which can be redirected to the new interface. In other cases, new
debug messages are added.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
This allows plugging in an ostream at run time so that we can audit all
the search calls we are doing.
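A minimal sketch of what such a pluggable stream can look like; the class and
method names here are hypothetical, not the actual crucible interface:

```cpp
#include <ostream>
#include <string>

// Hypothetical audit hook: with no stream plugged in, each call costs one
// pointer test; plugging in an ostream at run time reveals every search call.
class SearchAudit {
    static std::ostream *s_os;  // nullptr = auditing disabled
public:
    static void plug(std::ostream *os) { s_os = os; }
    static void note(const std::string &msg) {
        if (s_os) *s_os << msg << '\n';
    }
};
std::ostream *SearchAudit::s_os = nullptr;
```

A debug build or a signal handler could then call `SearchAudit::plug(&std::cerr)`
to turn the audit on without rebuilding.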
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
`BtrfsFsTreeFetcher` was used for early versions of the extent scanner, but
neither subvol nor extent scan now needs an object that is both persistent
and configured to access only one subvol. `BtrfsExtentDataFetcher` does
the same thing in that case.
Clarify the comments on what the remaining classes do, so that
`BtrfsFsTreeFetcher` doesn't get inadvertently reinvented in the future.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Binary searches can be extremely slow if the target bytenr is near a
metadata block group, because metadata items are not visible to the
binary search algorithm. In a non-mixed-bg filesystem, there can be
hundreds of thousands of metadata items between data extent items, and
since the binary search algorithm can't see them, it will run searches
that iterate over hundreds of thousands of objects about a dozen times.
This is less of a problem for mixed-bg filesystems because the data and
metadata blocks are not isolated from each other. The binary search
algorithm still can't see the metadata items, but there are usually
some data items close by to prevent the linear item filter from running
too long.
Introduce a new fetcher class (all the good names were taken) that tracks
where the end of the current block group is. When the end of the current
block group is reached in the linear search, skip ahead to a block group
that can contain data items.
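A rough sketch of the skip-ahead step, with illustrative names rather than the
actual `BtrfsDataExtentTreeFetcher` code:

```cpp
#include <cstdint>

// Sketch: when the linear item scan reaches the end of the block group it is
// currently in, jump straight to the next block group that can contain data
// extent items, instead of iterating over every metadata item in between.
uint64_t advance_scan_bytenr(uint64_t bytenr,
                             uint64_t current_bg_end,      // end of current block group
                             uint64_t next_data_bg_start)  // next DATA (or mixed) block group
{
    if (bytenr >= current_bg_end) {
        return next_data_bg_start;
    }
    return bytenr;
}
```

The block group boundaries themselves would be refreshed from block group item
lookups as the scan advances.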
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
The cwd is where core dumps and various profiling and verification
libraries want to write their data, whereas root_fd is the root of the
target filesystem. These are often intentionally different. When
they are different, `--strip-paths` sets the wrong prefix to strip
from paths.
Once the root fd has been established, we can set the path prefix to
the string prefix that we'll get from future calls to `name_fd`.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
bees explicitly supports storing $BEESHOME on another filesystem, and
does not require that filesystem to be btrfs; however, if $BEESHOME
is on a non-btrfs filesystem, there is an exception on every startup
when trying to identify the subvol root of the hash table file in order
to blacklist it, because non-btrfs filesystems don't have subvol roots.
Fix by checking not only whether $BEESHOME is on btrfs, but whether it
is on the _same_ btrfs as the bees root, without throwing an exception.
The hash table is blacklisted only when both filesystems are btrfs and
have the same fsid.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Enable use of the ioctl to probe whether two fds refer to the same btrfs,
without throwing an exception.
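As an illustration (not the crucible API itself), the probe can be done with
`BTRFS_IOC_FS_INFO`, treating an ioctl failure on a non-btrfs fd as "not the
same filesystem" instead of an exception:

```cpp
#include <linux/btrfs.h>
#include <sys/ioctl.h>
#include <cstring>

// Sketch: two fds are on the same btrfs iff both answer BTRFS_IOC_FS_INFO
// and report the same fsid.  A non-btrfs fd fails the ioctl (e.g. ENOTTY),
// which simply yields false here rather than throwing.
bool same_btrfs(int fd_a, int fd_b)
{
    btrfs_ioctl_fs_info_args a = {}, b = {};
    if (ioctl(fd_a, BTRFS_IOC_FS_INFO, &a) < 0) return false;
    if (ioctl(fd_b, BTRFS_IOC_FS_INFO, &b) < 0) return false;
    return std::memcmp(a.fsid, b.fsid, sizeof(a.fsid)) == 0;
}
```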
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
The debug log is only revealed when something goes wrong, but it is
created and discarded every time `seek_backward` is called, and it
is quite CPU-intensive.
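One way to get that effect is to defer building the log text until it is
actually needed; this is a generic sketch with hypothetical names, not the
actual `seek_backward` change:

```cpp
#include <functional>
#include <stdexcept>
#include <string>

// Sketch: the caller passes a callable that formats the debug log, and the
// (CPU-intensive) formatting only happens on the failure path.
void check_or_throw(bool ok, const std::function<std::string()> &build_log)
{
    if (!ok) {
        throw std::runtime_error(build_log());
    }
}
```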
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Remove dubious comments and an `#if 0` section. Document new event counters,
and add one for read failures.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
1% is a lot of data on a petabyte filesystem, and a long time to wait for an
ETA.
After 1 GiB we should have some idea of how fast we're reading the data.
Increase the time to 10 seconds to avoid a nonsense result just after a scan
starts.
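A sketch of the resulting gate; the variable names are illustrative, while the
1 GiB and 10 second thresholds are the ones described above:

```cpp
#include <cstdint>

// Only report a rate/ETA once enough data and wallclock time have gone by to
// make the estimate meaningful.
bool eta_is_meaningful(uint64_t bytes_scanned, double elapsed_seconds)
{
    const uint64_t min_bytes   = uint64_t(1) << 30;  // 1 GiB of data read
    const double   min_seconds = 10.0;               // 10 seconds since the scan started
    return bytes_scanned >= min_bytes && elapsed_seconds >= min_seconds;
}
```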
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
* `nodev`: This reduces rename attack surface by preventing bees from
opening any device file on the target filesystem.
* `noexec`: This prevents the mount point from being leveraged to execute
setuid binaries, or to execute anything at all through the mount point.
These options are not required because they duplicate features in the
bees binary (assuming that the mount namespace remains private):
* `noatime`: bees always opens every file with `O_NOATIME`, making
this option redundant.
* `nosymfollow`: bees uses `openat2` on kernels 5.6 and later with
flags that prevent symlink attacks. `nosymfollow` was introduced in
kernel 5.10, so every kernel that can do `nosymfollow` can already do
`openat2`. Also, historically, `$BEESHOME` can be a relative path with
symlinks in any path component except the last one, and `nosymfollow`
doesn't allow that.
Between `openat2` and `nodev`, all symlink attacks are prevented, and
rename attacks cannot be used to force bees to open a device file.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
We _recommend_ that `$BEESHOME` should be a subvol, and we'll create a
subvol if no directory exists; however, there's no reason to reject an
existing plain directory if the user chooses to use one.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
When the beesd script is started without systemd, the mount point isn't
automatically unmounted if the script is cancelled with ctrl+c.
Fixes: https://github.com/Zygo/bees/issues/281
Signed-off-by: Kai Krakow <kai@kaishome.de>
There's a pathological case where all of the extent scan crawlers except
one are at the end of a crawl cycle, but the one crawler that is still
running is keeping the Task queue full. The result is that bees never
starts the other extent scan crawlers, because the queue is always
full at the instant a new transid triggers the start of a new scan.
That's bad because it will result in bees falling behind when new data
from the inactive size tiers appears.
To fix this, check for throttling _after_ creating at least one scan task
in each crawler. That will keep the crawlers running, and possibly allow
them to claw back some space in the Task queue. It slightly overcommits
the Task queue, so there will be a few more Tasks than nominally allowed.
Also (re)introduce some hysteresis in the queue size limit and reduce it
a little, so that bees isn't continually stopping and restarting crawls
every time one task is created or completed, and so that we stay under
the configured Task limit despite overcommitting.
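A simplified sketch of the new ordering, with hypothetical types and thresholds
rather than the actual bees code:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

struct Crawler {
    // Returns false when this crawler has nothing left to scan this cycle.
    std::function<bool()> create_one_scan_task;
};

// Each crawler creates at least one Task before the throttle check applies,
// so a crawler that is behind can always make progress (slightly
// overcommitting the queue).  The high/low water marks add hysteresis so
// crawls don't stop and restart on every Task completion.
void restart_crawls(std::vector<Crawler> &crawlers,
                    const std::function<size_t()> &task_queue_size)
{
    const size_t high_water = 96;  // stop creating Tasks above this
    // (resumption elsewhere would wait for the queue to fall below a
    // separate, lower low-water mark)
    for (auto &crawler : crawlers) {
        while (crawler.create_one_scan_task()) {
            if (task_queue_size() >= high_water) {
                break;  // throttled, but only after at least one Task
            }
        }
    }
}
```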
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Toxic extent workarounds are going away because the underlying kernel
bugs have been fixed. They are no longer worthy of spamming non-developer
logs.
INO_PATHS can return no paths if an inode has been deleted. It doesn't
need a log message at all, much less one at WARN level.
Dedupe failure can be INFO, the same level as dedupe itself, especially
since the "NO dedupe" message doesn't mention what was [not] deduped.
Inspired by Kai Krakow's "context: demote "abandoned toxic match" to
debug log level".
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
This log message creates an overwhelming number of messages in the system
journal, leading to write-back flushing storms under high activity. As
it is a work-around message, it is probably only useful to developers,
so demote it to debug level.
This fixes latency spikes in desktop usage after adding a lot of new
files, especially since systemd-journal starts to flush caches if it
sees memory pressure.
Signed-off-by: Kai Krakow <kai@kaishome.de>
Tasks are not allowed to be queued more than once, but it is allowed
to queue a Task while it's already running, which means a Task can be
executed on two threads in parallel. Tasks detect this and handle it
by queueing the Task on its own post-exec queue. That in turn leads
to Workers which continually execute the same Task if that Task doesn't
create any new Tasks, while other Tasks sit on the Master queue waiting
for a Worker to dequeue them.
For idle Tasks, we don't want the Task to be rescheduled immediately.
We want the idle Task to execute again after every available Task on
both the main and idle queues has been executed.
Fix these by having each Task reschedule itself on the appropriate
queue when it finishes executing.
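A sketch of the rescheduling rule, using stand-in types rather than the actual
Task library:

```cpp
#include <queue>

// Stand-in for the real Task: records whether it was queued again while it
// was running, and whether it belongs on the idle queue.
struct TaskStub {
    bool run_again = false;
    bool is_idle   = false;
};

// Called when a Task finishes executing.  Instead of running the Task again
// on the same worker, put it back on the appropriate queue so every other
// queued Task (and, for idle Tasks, the whole idle queue) runs first.
void finish_exec(TaskStub &task,
                 std::queue<TaskStub> &main_queue,
                 std::queue<TaskStub> &idle_queue)
{
    if (!task.run_again) return;
    task.run_again = false;
    (task.is_idle ? idle_queue : main_queue).push(task);
}
```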
Priority-queued Tasks should be executed in priority order across not just one
Task's post-exec queue, but the entire local queue of the TaskConsumer.
Fix this by moving the sort into either the TaskConsumer that receives
a post-exec queue, if there is one, or into the Task that is created
to insert the post-exec queue into a TaskConsumer when one becomes
available in the future.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
next_transid tasks don't respect queue selection very well, because
they effectively end up spinning in a loop until all other worker
threads become busy.
Back this out, and fix the priority handling in the Task library.
This reverts commit 58db4071de5f524c35b1362bfb5b1fceedea503f.
Tasks using non-priority FIFO dependency tracking can insert themselves
into their own queue, to run the Task again immediately after it exits.
For priority queues, this attempts to splice the post-exec queue into
itself, which doesn't seem like a good idea.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Suppose Tasks A, B, and C are created in that order, and are currently running.
Task T acquires Exclusion E. Tasks B, A, and C attempt to acquire the
same Exclusion, in that order, but fail because Task T holds it.
The result is Task T with a post-exec queue:
T, [ B, A, C ] sort_requested
Now suppose Task U acquires Exclusion F, then Task T attempts to acquire
Exclusion F. Task T fails to acquire F, so T is inserted into U's
post-exec queue. The result at the end of the execution of T is a tree:
U, [ T ] sort_requested
\-> [ B, A, C ] sort_requested
Task T exits after failing to acquire a lock. When T exits, T will
sort its post-exec queue and submit the post-exec queue for execution
immediately:
Worker 1: U, [ T ] sort_requested
Worker 2: A, B, C
This isn't ideal because T, A, B, and C all depend on at least one
common Exclusion, so they are likely to immediately conflict with T
when U exits and T runs again.
Ideally, A, B, and C would at least remain in a common queue with T,
and ideally that queue is sorted.
Instead of inserting T into U's post-exec queue, insert T and all
of T's post-exec queue, which creates a single flattened Task list:
U, [ T, B, A, C ] sort_requested
Then when U exits, it will sort [ T, B, A, C ] into [ A, B, C, T ],
and run all of the queued Tasks in age priority order:
U exited, [ T, B, A, C ] sort_requested
U exited, [ A, B, C, T ]
[ A, B, C, T ] on TaskConsumer queue
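In `std::list` terms the flattening amounts to a splice; a sketch with Task
reduced to a label purely for illustration:

```cpp
#include <list>
#include <string>
#include <utility>

using TaskLabel = std::string;  // stand-in for the real Task handle

// Instead of parking T alone on U's post-exec queue and leaving B, A, C
// nested underneath it, move T *and* T's own post-exec queue onto U's queue,
// producing the flat "U, [ T, B, A, C ]" list shown above.
void enqueue_blocked(std::list<TaskLabel> &u_post_exec,
                     TaskLabel t,
                     std::list<TaskLabel> &t_post_exec)
{
    u_post_exec.push_back(std::move(t));
    u_post_exec.splice(u_post_exec.end(), t_post_exec);  // O(1); t_post_exec is now empty
}
```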
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Task started out as a self-organizing parallel-make algorithm, but ended
up becoming a half-broken wait-die algorithm. When a contended object
is already locked, Tasks enter a FIFO queue to restart and acquire the
lock. This is the "die" part of wait-die (all locks on an Exclusion are
non-blocking, so no Task ever does "wait"). The lock queue is FIFO wrt
_lock acquisition order_, not _Task age_ as required by the wait-die
algorithm.
Make it a 25%-broken wait-die algorithm by sorting the Tasks on lock
queues in order of Task ID, i.e. oldest-first, or FIFO wrt Task age.
This ensures the oldest Task waiting for an object is the one to get
it when it becomes available, as expected from the wait-die algorithm.
This should reduce the amount of time Tasks spend on the execution queue,
and reduce memory usage by avoiding the accumulation of Tasks that cannot
make forward progress.
Note that turning `TaskQueue` into an ordered container would have
undesirable side-effects:
* `std::list` has some useful properties wrt stability of object
location and cost of splicing. Other containers may not have these,
and `std::list` does have a `sort` method.
* Some Task objects are created at the beginning and reused continually,
but we really do want those Tasks to be executed in FIFO order wrt
submission, not Task ID. We can exclude these Tasks by only doing the
sorting when a Task is queued for an Exclusion object.
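A sketch of the age ordering, assuming Task IDs are assigned in creation order
so that sorting by ID is FIFO with respect to Task age:

```cpp
#include <cstdint>
#include <list>

struct TaskStub { uint64_t id; };  // stand-in: the real Task carries much more

// Sort a lock queue oldest-first.  std::list keeps object locations stable,
// splices cheaply, and provides its own sort(), so no container change is
// needed -- and this is only applied to queues attached to an Exclusion.
void sort_lock_queue(std::list<TaskStub> &queue)
{
    queue.sort([](const TaskStub &a, const TaskStub &b) { return a.id < b.id; });
}
```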
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Emphasize that the option is relevant to old kernels, older than the
minimum supportable version threshold.
De-emphasize the use case of "send-workaround" as a synonym for "exclude
read-only".
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
One of the more obvious ways to reduce bees load is to simply not run
it all the time. Explicitly state using maintenance windows as a load
management option.
SIGUSR1 and SIGUSR2 should have been documented somewhere else before now.
Better late than never.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
The theories behind bees slowing down when presented with a larger hash
table turned out to be wrong. The real cause was a very old bug which
submitted thousands of `LOGICAL_INO` requests when only a handful of
requests were needed.
"Compression on the filesystem" -> "Compression in files"
Don't be so "dramatic". Be "rapid" instead.
Remove "cannot avoid modifying read-only snapshots" as a distinction
between subvol and extent scans. Both modes support send workaround
and send waiting with no significant distinction.
Emphasize extent scan's better handling of many snapshots. Also reflinks.
Add some discussion of `--throttle-factor`.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Thread names have changed. Document some of the newer ones.
Don't jump immediately to blaming poor performance on qgroups or
autodefrag. These do sometimes have kernel regressions but not all
the time.
Emphasize advantage of controlling bees deferred work requests at the
source, before btrfs gets stuck committing them.
Avoid asserting that it's OK for gdb to crash.
Remove mention of lower-layer block device issues wrt corruption.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
"Kernel" -> "Linux kernel". If you can run bees on a kernel that isn't
Linux, congratulations!
Emphasize the age of the data corruption warnings. Once 5.4 reaches
EOL we can remove those.
Simplify the discussion of old kernels and API levels. There's a
new optional kernel API for `openat2` support at 5.6. The absolute
minimum kernel version is still 4.2, and will not increase to 4.15
until the subvol scanners are removed.
Remove discussion of bees support for kernels 4.19 (which recently
reached EOL) and earlier.
The `LOGICAL_INO` vs dedupe bug is actually a `LOGICAL_INO` vs clone bug.
Dedupe isn't necessary to reproduce it.
Remove a stray ')'.
Strip out most of the discussion of slow backrefs, as they are no longer a
concern on the range of supported kernel versions. Leave some description
there because bees still has some vestigial workarounds.
Remove `btrfs send` from the "Unfixed kernel bugs" section, which makes
the section empty, so remove the section too. bees now handles send on
a subvol reasonably well.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Emphasize "large" is an upper bound on the size of filesystem bees
can handle.
New strengths: largest extent first for fixed maintenance windows,
scans data only once (ish), recovers more space
Removed weaknesses: less temporary space
Need more caps than `CAP_SYS_ADMIN`.
Emphasize DATA CORRUPTION WARNING is an old-kernel thing.
Update copyright year.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Tested on filesystems larger than 100T too, but let's use Fermi
approximation. Next size is 1P.
Removed interaction with block-level SSD caching subsystems. These are
really btrfs metadata vs. a lower block layer, and have nothing to do
with bees.
Added mixed block groups to the tested list, as mixed block groups
required explicit support in the extent scanner.
Added btrfs-convert to the tested list. btrfs-convert has various
problems with space allocation in general, but these can be solved by
carefully ordered balances after conversion, and they have nothing to
do with bees.
In-kernel dedupe is dead and the stubs were removed years ago. Remove it
from the list.
btrfs send now plays nicely with bees on all supportable kernels, now
that stable/linux-4.19.y is dead. Send workaround is only needed for
kernels before v5.4 (technically v5.2, but nobody should ever mount a
btrfs with kernel v5.1 to v5.3). bees will pause automatically when
deduping a subvol that is currently running a send.
bees will no longer gratuitously refragment data that was defragmented
by autodefrag.
Explicitly list all the RAID profiles tested so far, as there have been
some new ones.
Explicitly list other deduplicators tested.
Sort the list of btrfs features alphabetically.
Add scrub and balance, which have been tested with bees since the
beginning.
New tested btrfs features: block-group-tree, raid1c3, raid1c4.
New untested btrfs features: squotas, raid-stripe-tree.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
This records the time when the progress data was calculated, to help
indicate when the data might be very old.
While we're here, move "now" out of the loop so there's only one value.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
This increases resistance to symlink and mount attacks.
Previously, bees could follow a symlink or a mount point in a directory
component of a subvol or file name. Once the file was opened, the open
file descriptor was checked to see whether its subvol and inode matched
the expected file in the target filesystem. Files that failed to match
were immediately closed.
With openat2 resolve flags, symlinks and mount points terminate path
resolution in the kernel. Paths that lead through symlinks or onto
mount points cannot be opened at all.
Fall back to openat() if openat2() returns ENOSYS, so bees will still
run on kernels before v5.6.
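A sketch of that open path. The exact flag set is an assumption
(`RESOLVE_NO_SYMLINKS | RESOLVE_NO_XDEV` matches the behaviour described
above), and the `openat2()` prototype is declared locally because libc may not
ship a wrapper:

```cpp
#include <cerrno>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <fcntl.h>
#include <linux/openat2.h>

// Provided by libc if it ever implements openat2(), or by a weak syscall
// wrapper like the one sketched under a later commit in this series.
extern "C" int openat2(int dirfd, const char *pathname,
                       struct open_how *how, size_t size) noexcept;

// Open a path inside the target filesystem, refusing to follow symlinks or
// cross onto other mounts during in-kernel path resolution.  On kernels
// before v5.6, openat2() fails with ENOSYS and we fall back to openat(),
// leaving the older post-open subvol/inode checks as the only defence.
int open_untrusted_path(int dirfd, const char *path, int flags)
{
    struct open_how how;
    std::memset(&how, 0, sizeof(how));
    how.flags   = uint64_t(flags);
    how.resolve = RESOLVE_NO_SYMLINKS | RESOLVE_NO_XDEV;
    int fd = openat2(dirfd, path, &how, sizeof(how));
    if (fd >= 0 || errno != ENOSYS) return fd;
    return openat(dirfd, path, flags);
}
```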
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Since we're now using weak symbols for dodgy libc functions, we might
as well do it for gettid() too.
Use ::gettid() in the global namespace and let libc override it.
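A sketch of the weak-symbol approach; illustrative only, the real wrapper may
differ in detail:

```cpp
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

// Weak fallback definition of ::gettid() for a libc that does not provide
// one (glibc gained gettid() in 2.30).  When libc does provide the symbol,
// this definition quietly gets out of the way.
extern "C" {
    __attribute__((weak))
    pid_t gettid() noexcept
    {
        return pid_t(syscall(SYS_gettid));
    }
}
```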
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
openat2 allows closing more TOCTOU holes, but we can only use it when
the kernel supports it.
This should disappear seamlessly when libc implements the function.
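Roughly what such a wrapper can look like; a sketch, assuming kernel headers
that define `SYS_openat2` and `struct open_how`:

```cpp
#include <cstddef>
#include <linux/openat2.h>
#include <sys/syscall.h>
#include <unistd.h>

// Weak definition: forwards to the raw syscall today, and should disappear
// seamlessly once libc ships its own openat2() wrapper.
extern "C" {
    __attribute__((weak))
    int openat2(int dirfd, const char *pathname,
                struct open_how *how, size_t size) noexcept
    {
        return int(syscall(SYS_openat2, dirfd, pathname, how, size));
    }
}
```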
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
"ctime", an abbreviation of "cycle time", collides with "ctime", an
abbreviation of "st_ctime", a well-known filesystem term.
"tm_left" fits in the column, so use that.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
* Report position within cycle in units that cannot be mistaken for size or percentage
* Put the total/maximum values in their own row
* Add a start time column
* Change column titles to reference "cycles"
* Use "idle" instead of "finished" when a crawler is not running
* Replace "transid" with "gen" because it's shorter
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
The scanners which finish early can become stuck behind scanners that are
able to keep the queue full. Switch the next_transid task to the normal
Task queues so that we force scanners to restart on every new transaction,
possibly deferring already queued work to do so.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Add yet another field to the scan/skip report line: the wallclock
time used to process the extent ref.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>