Do rebuild bees-version.cc if libcrucible changes.
Don't rebuild bees-version.cc if it doesn't change.
Also use the standard suffix for new files.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
These tools are obsolete. fiemap was a thin wrapper around FIEMAP,
but FIEMAP is not useful on btrfs. fiewalk was a thin wrapper around
BtrfsExtentWalker, but development on BtrfsExtentWalker has been
abandoned.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
When a hash table write fails, we skip over the write throttling because
we didn't report that we successfully wrote an extent. This can be bad
if the filesystem is full and the allocations for writes are burning a
lot of CPU time searching for free space.
We also don't retry the write later on since we assume the extent is
clean after a write attempt whether it was successful or not, so the
extent might not be written out later when writes are possible again.
Check whether a hash extent is dirty, and always throttle after
attempting the write.
If a write fails, leave the extent dirty so we attempt to write it out
the next time flush cycles through the hash table. During shutdown
this will reattempt each failing write once; after that, the updated hash
table data will be dropped.
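
Roughly, the corrected flush logic looks like this (a sketch; the names
are illustrative, not the actual bees code):

    #include <cstdint>

    struct HashExtent {
        bool m_dirty = false;
        // ... hash table cells ...
    };

    bool try_write(HashExtent &extent);   // returns false on ENOSPC etc.
    void throttle();                      // rate-limits hash table writeback

    void flush_extent(HashExtent &extent)
    {
        if (!extent.m_dirty) {
            return;                       // nothing to write for this extent
        }
        if (try_write(extent)) {
            extent.m_dirty = false;       // clean only after a successful write
        }
        // on failure m_dirty stays true, so the next flush cycle retries
        throttle();                       // throttle whether or not the write worked
    }
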
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Calling 'bees -m4' should not call 'std::terminate()', but it does.
Use catch_all instead. It will still pass the exit value to return
from main.
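
The shape of the fix, as a sketch (bees_main and the exact catch_all
signature are assumptions here, not the real declarations):

    #include <cstdlib>
    #include <functional>

    // crucible's catch_all runs a callable and traps any exception it
    // throws instead of letting it reach std::terminate.
    void catch_all(const std::function<void()> &f);

    int bees_main(int argc, char **argv);  // may throw on bad options like -m4

    int main(int argc, char **argv)
    {
        int rv = EXIT_FAILURE;
        catch_all([&]() {
            rv = bees_main(argc, argv);    // exit value survives the trap
        });
        return rv;
    }
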
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
BEESTOOLONG was always reporting a size of zero, and the offset of the
end of the readahead region. Report the original size instead (and also
in BEESTRACE and BEESNOTE).
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Drop the crawl_restart counter; the event it counts doesn't happen here (or anywhere else).
Add the crawl_again counter for extents that are restarted due to an
extent-level lock.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
libcrucible can deal with the Linux kernel and/or libc's thread name
limitations. No need to duplicate that work in bees.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
The caller of scan_forward has to stop advancing the BeesFileCrawl
position when an extent lock blocks a scan, so that it will resume
from the same position when the Task is scheduled again; otherwise,
bees simply skips over the extent and leaves it incompletely deduped.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Restart crawl_more (and update crawl roots and flush FD caches) every
time the transid changes, and only when the transid changes, but
not more often than a reasonable minimum poll interval.
Clean up the log message: use the proper thread name and remove
the wildly inaccurate estimate of when crawl will resume.
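
The intended polling behavior, sketched with placeholder names:

    #include <chrono>
    #include <cstdint>
    #include <thread>

    uint64_t get_root_transid();            // current btrfs transaction id
    void update_crawl_roots_and_caches();   // also flushes FD caches
    void restart_crawl_more();

    void transid_poll_loop()
    {
        uint64_t last_transid = 0;
        while (true) {
            const uint64_t transid = get_root_transid();
            if (transid != last_transid) {
                last_transid = transid;
                update_crawl_roots_and_caches();
                restart_crawl_more();       // only when the transid changes
            }
            // floor on the polling rate; the interval here is illustrative
            std::this_thread::sleep_for(std::chrono::seconds(10));
        }
    }
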
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
We don't need to cache 65536 extent maps, especially if each one
can have almost 700K references.
Valgrind's massif tool points to the extent map cache as a very
large memory allocator, but test runs with memcg disagree.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
If we have loadavg targeting enabled, there may be no worker threads
available to respond to new subvols, so we should not bother updating
the subvols list.
Put insert_new_crawl into a Task so it only executes when a worker
is available.
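
Roughly (the title-plus-function Task constructor below follows
crucible's style, but treat the exact API as an assumption):

    #include <functional>
    #include <string>

    // Stand-in for crucible's Task: construct with a title and a
    // function, then run() queues it for execution by a worker thread.
    struct Task {
        Task(std::string title, std::function<void()> exec_fn);
        void run();
    };

    void insert_new_crawl_impl();   // the previously directly-called version

    void queue_insert_new_crawl()
    {
        // Executes only when a worker thread is free, so loadavg
        // targeting defers the subvol list update instead of blocking.
        Task("insert_new_crawl", []() {
            insert_new_crawl_impl();
        }).run();
    }
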
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
On large filesystems where the min_transid of all subvols gets stuck at 0,
bees may lose the ability to effectively track recent data. A secondary sort
by max_transid will allow scanning newer subvols that were created after bees
started running on the filesystem, but before bees completed the first scan
of all subvols.
On the other hand, the secondary sort does a reverse version of the
sequential scan mode, and the sequential scan mode is simply awful.
Disable the secondary sort for now.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Split each scan mode into two distinct phases:
1. A heavy discovery phase, where we search the entire filesystem
for something (new items in subvol trees in this case).
2. A light consuming phase, where we fetch extents to dedupe
from places that we found in the discovery phase.
Part 1 recomputes the subvol ordering every time there is a new transid.
For some scan modes this computation is quite expensive, far too costly
to pay for every extent, so we do it no more than once per transaction.
Part 2 is run every time a worker thread hits the crawl_more Task.
It simply pulls one extent from the first crawler off a sorted list,
removing the crawler from the list when the crawler runs out of data.
Part 1 creates a new structure and swaps it into place, while Part 2
continues to run using the previous structure. Neither of these
need to block the other, so they don't.
The separate class and base pointer also make it easier to add new scan
modes that are not based on subvol trees or that don't use BeesCrawl.
While we're here, fix up some method visibility in BeesRoots.
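
A compressed sketch of the two-phase structure (class and member names
invented for illustration):

    #include <cstdint>
    #include <memory>
    #include <mutex>
    #include <vector>

    struct Crawler {
        bool crawl_one_extent();   // fetch one extent; false when out of data
    };

    struct SortedCrawlList {
        std::vector<std::shared_ptr<Crawler>> m_crawlers;  // scan-mode order
    };

    std::mutex m_mutex;
    std::shared_ptr<SortedCrawlList> m_list;

    // Part 1: once per transid, build the new ordering off to the side,
    // then swap it into place under the lock.
    void rebuild(uint64_t transid);

    // Part 2: each crawl_more hit pulls one extent from the first crawler.
    void crawl_more()
    {
        std::shared_ptr<SortedCrawlList> list;
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            list = m_list;   // keep this snapshot even if Part 1 swaps
        }
        if (list && !list->m_crawlers.empty()) {
            if (!list->m_crawlers.front()->crawl_one_extent()) {
                // crawler ran out of data; remove it from the list
                list->m_crawlers.erase(list->m_crawlers.begin());
            }
        }
    }
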
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Set the constructor's default scan mode to an invalid mode, so if we
change the default, we don't have to update two places.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Crawl mode 3 'recent' prioritizes data from new updates to previously
scanned subvols over subvols that have not been completely scanned yet.
If no such new data exists, it falls back to a variation of the
'lockstep' scan mode.
This enables us to keep up with new data as it arrives, a key weakness
of all the other scan modes, and worth violating our unwritten "no new
scan modes until we have extent-tree dedupe working" policy for.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Inode-oriented scan workers must do all of their work sequentially,
so it's counterproductive to spawn a Task to do a background dedupe.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
When two Tasks attempt to lock the same extent, append the later Task
to the earlier Task's post-exec work queue. This will guarantee that
all Tasks which attempt to manipulate the same extent will execute
sequentially, and free up threads to process other extents.
Similarly, if two scanner threads operate on the same inode, any dedupe
they perform will lock out other scanner threads in btrfs. Avoid this
by serializing Task objects that reference the same file.
This does theoretically use an unbounded amount of memory, but in practice
a Task that encounters a contended extent or inode quickly stops spawning
new Tasks that might increase the queue size, and all Tasks that might
contend for the same lock(s) end up on a single FIFO queue.
Note that the scope of inode locks is intentionally global, i.e. when
an inode is locked, it locks every inode with the same number in every
subvol. This avoids significant lock contention and task queue growth
when the same inode with the same file extents appears in snapshots.
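
The mechanism, reduced to a sketch (this LockTable and Task shape are
illustrative, not crucible's real classes):

    #include <cstdint>
    #include <map>
    #include <mutex>
    #include <queue>

    struct Task {
        void run();   // queue this Task for execution by a worker thread
    };

    struct LockTable {
        std::mutex m_mutex;
        // key = extent bytenr, or an inode number (deliberately without
        // the subvol id, so snapshots of the same inode share one lock)
        std::map<uint64_t, std::queue<Task>> m_holders;

        // Returns true if the caller now holds the lock.  Otherwise the
        // Task joins the holder's FIFO and is rescheduled at unlock time.
        bool try_lock(uint64_t key, Task task)
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            auto rv = m_holders.insert({key, {}});
            if (rv.second) return true;     // no earlier holder
            rv.first->second.push(task);    // run after the earlier Task
            return false;
        }

        void unlock(uint64_t key)
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            auto found = m_holders.find(key);
            while (!found->second.empty()) {
                found->second.front().run();   // requeue waiters in order
                found->second.pop();
            }
            m_holders.erase(found);
        }
    };
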
Fixes: https://github.com/Zygo/bees/issues/158
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Split crawlers into two separate Tasks:
1. a Task which locates the next inode with a new data extent.
2. a Task which scans every new extent in that inode.
This simplifies some lock contention and execution ordering issues.
Files are read sequentially. Workers dynamically scale up or
down as needed, without creating thousands of deferred Task objects.
Workers obtain inode locks for different inodes in btrfs, so they
can work in parallel instead of waiting for each other.
This change in behavior comes with new names for the worker Tasks:
"crawl_master" is now "crawl_more", the singular Task which
creates inode-scanning Tasks.
"crawl_<subvol>" is now "crawl_<subvol>_<inode>".
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
This was done on the development branch three years ago, and
has been creating annoying merge conflicts ever since. Sync
up the branches so they have the same names for these.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Drop the cache since we no longer have to open a file every time we
check a subvol's status.
Also stop counting workaround events at the root level twice.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Kernels that needed the balance workaround are frankly too buggy
to run bees at all. The workaround also makes the locking stories
around logical_ino calls and process exit complicated, so get rid of
it completely.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
At some point BtrfsExtentWalker will be fully deprecated and removed from
bees. Might as well start with code that hasn't been built in 6 years.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Quite often bees exceeds its service timeout for termination because
it is waiting for a loop embedded in a Task to finish some long-running
btrfs operation. This can cause bees to be aborted by SIGKILL before
it can completely flush the hash table or save crawl state.
There are only two important things SIGTERM does when bees terminates:
1. Save crawl progress
2. Flush out the hash table
Everything else is automatically handled by the kernel when the process
is terminated by SIGKILL, so we don't have to bother doing it ourselves.
This can save considerable time at shutdown since we don't have to wait
for every thread to reach a point where it becomes idle, or force loops
to terminate by throwing exceptions, or check a condition every time we
access a pointer. Instead, we need only do the things in the list
above, and then call _exit() to clean up everything else.
Hash table and crawl state writeback can happen in their background
threads instead of the foreground one. Separate the "stop" method for
these classes into "stop_request" and "stop_wait" so that these writebacks
can run at the same time.
Deprecate and remove all references to the BeesHalt exception, and remove
several unnecessary checks for BeesContext::stop_requested.
Pause the task queue instead of cancelling it, which preserves the
crawl progress state and stops new Tasks from competing for iops and
CPU during writeback.
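
The shutdown path then looks roughly like this (a sketch; the real
method and object names in bees may differ):

    #include <cstdlib>
    #include <unistd.h>

    struct BackgroundWriter {
        void stop_request();   // begin final writeback, return immediately
        void stop_wait();      // block until the writeback completes
    };

    BackgroundWriter crawl_state, hash_table;
    void task_queue_pause();   // stop new Tasks, keep crawl progress state

    void terminate_bees()
    {
        // 1. start both writebacks in their own background threads
        crawl_state.stop_request();
        hash_table.stop_request();

        // 2. stop new Tasks from competing for iops and CPU
        task_queue_pause();

        // 3. wait for both writebacks, which now run concurrently
        crawl_state.stop_wait();
        hash_table.stop_wait();

        // 4. let the kernel clean up everything else
        _exit(EXIT_SUCCESS);
    }
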
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Dump the instantaneous load (last 5 seconds, extracted from load average)
and the computed target worker count (before rounding and truncation)
on the same status line as the task and worker thread count.
This should give better visibility into Task's thread count calculation
algorithm.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
It seems that readahead() does not work on btrfs, or at least it has
no discernible effect. Enable the workaround instead.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
In current kernels there is a bug which leads to an infinite loop in
add_all_parents(). The bug is triggered by one thread running dedupe
while another runs logical_ino.
Work around this by ensuring that the bees process never runs dedupe and
logical_ino ioctls at the same time. Any number of either can run
at the same time, but not one of both.
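
The invariant, reduced to a sketch (the real bees locking class is more
general than this):

    #include <condition_variable>
    #include <cstddef>
    #include <mutex>
    #include <string>

    // Any number of concurrent holders of one type, never a mix of types.
    class TypeLock {
        std::mutex              m_mutex;
        std::condition_variable m_cv;
        std::string             m_type;       // type currently holding
        size_t                  m_count = 0;  // number of current holders
    public:
        void lock(const std::string &type)
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_cv.wait(lock, [&]() { return m_count == 0 || m_type == type; });
            m_type = type;
            ++m_count;
        }
        void unlock()
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            if (--m_count == 0) m_cv.notify_all();
        }
    };

    // Usage: take lock("dedupe") around dedupe ioctls and
    // lock("logical_ino") around LOGICAL_INO ioctls, process-wide.
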
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
bees_sync() was an exception-trapping wrapper around fsync() which is
not needed in any of the contexts from which it was called:
1. dedupe operations implicitly flush the src data, so there is
no need to call fsync() to do that twice.
2. crawl position is written to a temporary file and renamed
over the original, which always forces a flush when the original
exists. On the first write, where there is no original, a
crash would result in starting over with an empty or hole-filled
beescrawl file, which is the initial state of bees. There is also
a long history of kernel bugs triggered by fsync() in this case.
3. we use unreadahead to trigger writeback for flushing the
hash table to persistent storage. This is one place where we might
use fsync() after all, as part of bees_unreadahead's emulation of
POSIX_FADV_DONTNEED, but we need to get read-once behavior from
the scanner before we can use this capability.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
If there's an error while writing the crawl state, the state should
remain dirty. If the crawl state is successfully written, the state
is only clean if there were no changes to crawl state since the write
was committed. We need to release the lock while writing the state but
correctly set the dirty flag when the state is written successfully.
Replace the bool with a version number counter. Track the last version
successfully saved and the current version of the crawl state. The state
is dirty if these counters disagree and clean if they agree.
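
In sketch form (member names invented):

    #include <cstdint>
    #include <mutex>

    struct CrawlState {
        std::mutex m_mutex;
        uint64_t   m_version = 1;         // bumped on every state change
        uint64_t   m_saved_version = 0;   // version of the last good write

        bool write_state_to_disk();       // temp file + rename; may fail

        void modify()
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            ++m_version;                  // marks the state dirty
            // ... mutate crawl state ...
        }

        void writeback()
        {
            uint64_t writing_version = 0;
            {
                std::unique_lock<std::mutex> lock(m_mutex);
                if (m_version == m_saved_version) return;  // already clean
                writing_version = m_version;
            }
            // lock released: later changes bump m_version while we write
            if (write_state_to_disk()) {
                std::unique_lock<std::mutex> lock(m_mutex);
                m_saved_version = writing_version;  // clean only if unchanged
            }
            // on failure, m_saved_version != m_version: still dirty
        }
    };
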
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
If we iterate over all roots to find the max transid, but the set of
all roots is empty, we'll get a nonsense number. Make sure that number
doesn't reach the crawling logic by killing it with an exception.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Yet another build failure of the form:
error: flexible array member fiemap... not at end of struct crucible::Fiemap...
bees doesn't use fiemap any more, so the fixes here are minimal changes
to make it build, not shining examples of C++ class design.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
We had an unfortunate pattern of:
    const BeesFileRange bfr;
    shared_ptr<BeesContext> ctx;
    // ...
    BEESNOTE("foo " << bfr);
    bfr.fd(ctx);
    BEESNOTE("foo after opening: " << bfr);
If dump_status started running after the first BEESNOTE, but before
the second, then bfr.fd() might expose a single Fd object's shared_ptr
member to two threads at the same time (the thread running dump_status
and the thread running BEESNOTE) without protection by a lock. One of
the threads would see a partially-initialized Fd object, and the other
thread would crash on an assertion failure, e.g.
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007f4c4fde5537 in __GI_abort () at abort.c:79
#2 0x00007f4c4fde540f in __assert_fail_base (fmt=0x7f4c4ff4e128 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x5557605629dd "!m_destroyed", file=0x5557605627c0 "../include/crucible/namedptr.h", line=77, function=<optimized out>) at assert.c:92
#3 0x00007f4c4fdf4662 in __GI___assert_fail (assertion=assertion@entry=0x5557605629dd "!m_destroyed", file=file@entry=0x5557605627c0 "../include/crucible/namedptr.h", line=line@entry=77,
function=function@entry=0x555760562970 "crucible::NamedPtr<Return, Arguments>::Value::~Value() [with Return = crucible::IOHandle; Arguments = {int}]") at assert.c:101
#4 0x00005557605306f6 in crucible::NamedPtr<crucible::IOHandle, int>::Value::~Value (this=0x7f4a3c2ff0d0, __in_chrg=<optimized out>) at ../include/crucible/namedptr.h:77
#5 0x00005557605137da in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x7f4a3c2ff0c0) at /usr/include/c++/10/bits/shared_ptr_base.h:151
#6 std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x7f4a3c2ff0c0) at /usr/include/c++/10/bits/shared_ptr_base.h:151
#7 std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x7f4c4c5b5f28, __in_chrg=<optimized out>) at /usr/include/c++/10/bits/shared_ptr_base.h:733
#8 std::__shared_ptr<crucible::IOHandle, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x7f4c4c5b5f20, __in_chrg=<optimized out>) at /usr/include/c++/10/bits/shared_ptr_base.h:1183
#9 std::shared_ptr<crucible::IOHandle>::~shared_ptr (this=0x7f4c4c5b5f20, __in_chrg=<optimized out>) at /usr/include/c++/10/bits/shared_ptr.h:121
#10 crucible::Fd::~Fd (this=0x7f4c4c5b5f20, __in_chrg=<optimized out>) at ../include/crucible/fd.h:46
#11 BeesFileRange::file_size (this=0x7f4c4e5ba4a0) at bees-types.cc:156
#12 0x0000555760513950 in operator<< (os=..., bfr=...) at bees-types.cc:80
#13 0x000055576050d662 in std::function<void (std::ostream&)>::operator()(std::ostream&) const (__args#0=..., this=0x7f4c4e5b9f60) at /usr/include/c++/10/bits/std_function.h:622
#14 BeesNote::get_status[abi:cxx11]() () at bees-trace.cc:165
#15 0x00005557604c9676 in BeesContext::dump_status (this=0x5557611c4de0) at bees-context.cc:89
#16 0x00005557605206fb in std::function<void ()>::operator()() const (this=this@entry=0x7f4c4c5b65f0) at /usr/include/c++/10/bits/std_function.h:622
#17 crucible::catch_all(std::function<void ()> const&, std::function<void (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)> const&) (f=..., explainer=...) at error.cc:55
#18 0x000055576050aaa7 in operator() (__closure=0x5557611c52c8) at bees-thread.cc:22
#19 0x00007f4c501beed0 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#20 0x00007f4c502c8ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#21 0x00007f4c4febddef in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Fix by making BeesFileRange::m_fd really const (not just mutable),
then fix all the broken code referencing it.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
It turns out we never set m_dirty's initial value. This is not a
practical problem because 1) it's mostly harmless if m_dirty is spuriously
true, 2) we set it to true every time bees scans a data block, and 3)
the allocation happens early in startup when most memory allocations
are using zero-filled pages, so it's probably getting a false value at
construction in most cases.
valgrind complains about it, so it has to go.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Once the physical addresses are known, put them where they can be
seen in BEESTATUS as well as the log.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
There are kernel bugs in LOGICAL_INO from time to time; however, we
can't avoid these bugs by serializing LOGICAL_INO calls.
The serialization code hasn't been used for some time, so remove it
and its less-than-completely-accurate comments.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
For one thing, it should _say_ that there are too many duplicates.
We were making the user read the manual to find that out.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Forcing the entire hash table into immediate writeback causes crippling
write latencies at shutdown. Even discarding pages as they are read in
at startup can trigger a writeback latency spike if the pages are dirty
at read time.
Better to let the VM subsystem handle this on its own.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Putting this information in the logs saves us from having to ask for
the kernel version and machine name every time.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
We need random numbers in more places, so centralize the engines.
Initialize with a proper random seed so every worker thread gets
different behavior.
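
Something along these lines (a sketch of the pattern, not the exact
bees code):

    #include <random>

    // One well-seeded engine per thread, shared by every call site.
    static std::mt19937 &bees_generator()
    {
        thread_local std::mt19937 engine{std::random_device{}()};
        return engine;
    }

    // Example use: a random delay so worker threads spread out.
    double random_delay(double min_s, double max_s)
    {
        std::uniform_real_distribution<double> dist(min_s, max_s);
        return dist(bees_generator());
    }
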
Signed-off-by: Zygo Blaxell <bees@furryterror.org>