Dump the instantaneous load (last 5 seconds, extracted from load average)
and the computed target worker count (before rounding and truncation)
on the same status line as the task and worker thread count.
This should give better visibility into Task's thread count calculation
algorithm.
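For illustration only (a hedged sketch, not the actual Task code: the function and parameter names are invented, and the real calculation extracts a shorter-term load estimate rather than using the raw 1-minute average):

    #include <stdlib.h>
    #include <algorithm>
    #include <thread>

    // Sketch: derive a floating-point worker-count target from the load
    // average.  This raw value is what the status line now shows, before
    // rounding and truncation to a usable thread count.
    double target_worker_count(double target_load, size_t current_workers)
    {
        double loadavg[1] = { 0 };
        if (getloadavg(loadavg, 1) != 1) {
            return std::thread::hardware_concurrency();
        }
        const double raw_target = current_workers + (target_load - loadavg[0]);
        return std::max(1.0, std::min<double>(raw_target, std::thread::hardware_concurrency()));
    }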
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
It seems that readahead() does not work on btrfs, or at least it has
no discernible effect. Enable the workaround instead.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
In current kernels there is a bug which leads to an infinite loop in
add_all_parents(). The bug is triggered by one thread running dedupe
while another runs logical_ino.
Work around this by ensuring that the bees process never runs the dedupe
and LOGICAL_INO ioctls at the same time. Any number of either can run
concurrently, but never both kinds at once.
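The workaround amounts to a lock with two mutually exclusive groups.
A minimal sketch of the idea (illustrative only, with invented names;
not the bees implementation):

    #include <condition_variable>
    #include <mutex>
    #include <string>

    // Any number of holders of one kind may hold the lock at once,
    // but two different kinds are never active at the same time.
    class ExclusiveKindLock {
        std::mutex              m_mutex;
        std::condition_variable m_cv;
        std::string             m_active_kind;
        size_t                  m_holders = 0;
    public:
        void lock(const std::string &kind) {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_cv.wait(lock, [&] { return m_holders == 0 || m_active_kind == kind; });
            m_active_kind = kind;
            ++m_holders;
        }
        void unlock() {
            std::unique_lock<std::mutex> lock(m_mutex);
            if (--m_holders == 0) {
                m_cv.notify_all();
            }
        }
    };

Dedupe threads would lock("dedupe") and LOGICAL_INO threads would
lock("logical_ino"): each group runs concurrently with itself, never
with the other.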
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
bees_sync() was an exception-trapping wrapper around fsync() which is
not needed in any of the contexts from which it was called:
1. dedupe operations implicitly flush the src data, so there is
no need for a second flush via fsync().
2. crawl position is written to a temporary file and renamed
over the original, which always forces a flush when the original
exists (see the sketch after this list). On the first write, where
there is no original, a crash would just mean starting over with an
empty or hole-filled beescrawl file, which is the initial state of
bees anyway. There is also a long history of kernel bugs triggered
by fsync() in this case.
3. we use unreadahead to trigger writeback for flushing the
hash table to persistent storage. This is the one place where we
might use fsync() after all, as part of bees_unreadahead's emulation
of POSIX_FADV_DONTNEED, but we need read-once behavior from the
scanner before we can use that capability.
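For reference, the write-and-rename pattern from point 2 looks roughly
like this (hedged sketch with invented names; error handling omitted):

    #include <cstdio>
    #include <fstream>
    #include <string>

    void save_state(const std::string &path, const std::string &contents)
    {
        const std::string tmp = path + ".tmp";
        {
            // Write the whole new state into a temporary file
            std::ofstream out(tmp, std::ios::trunc);
            out << contents;
        }
        // rename() atomically replaces the old file, so a crash leaves
        // either the old state or the new state, never a torn file.
        // bees relies on btrfs flushing when renaming over an existing
        // file, rather than calling fsync() here.
        std::rename(tmp.c_str(), path.c_str());
    }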
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
If there's an error while writing the crawl state, the state should
remain dirty. If the crawl state is successfully written, the state
is only clean if there were no changes to crawl state since the write
was committed. We need to release the lock while writing the state but
correctly set the dirty flag when the state is written successfully.
Replace the bool with a version number counter. Track the last version
successfully saved and the current version of the crawl state. The state
is dirty if these counters disagree and clean if they agree.
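A minimal sketch of the counter scheme (invented names, not the actual
bees code):

    #include <cstdint>
    #include <mutex>

    class CrawlState {
        std::mutex m_mutex;
        uint64_t   m_version = 1;       // bumped on every modification
        uint64_t   m_saved_version = 0; // last version successfully written
    public:
        void modify() {
            std::unique_lock<std::mutex> lock(m_mutex);
            ++m_version;
            // ...apply the change to the state here...
        }
        void save() {
            uint64_t version_being_saved;
            {
                std::unique_lock<std::mutex> lock(m_mutex);
                version_being_saved = m_version;
            }
            // ...write the state to disk without holding the lock...
            // Only update the saved version if the write succeeded.
            std::unique_lock<std::mutex> lock(m_mutex);
            m_saved_version = version_being_saved;
        }
        bool dirty() {
            std::unique_lock<std::mutex> lock(m_mutex);
            // Dirty whenever changes arrived after the last good write
            return m_saved_version != m_version;
        }
    };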
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
If we iterate over all roots to find the max transid, but the set of
all roots is empty, we'll get a nonsense number. Make sure that number
doesn't reach the crawling logic by killing it with an exception.
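The guard is essentially this (sketch; names are invented):

    #include <cstdint>
    #include <set>
    #include <stdexcept>

    uint64_t max_transid(const std::set<uint64_t> &root_transids)
    {
        if (root_transids.empty()) {
            // "max of nothing" would be a nonsense number, so stop here
            throw std::runtime_error("cannot compute transid_max: no roots found");
        }
        return *root_transids.rbegin();
    }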
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Yet another build failure of the form:
error: flexible array member fiemap... not at end of struct crucible::Fiemap...
bees doesn't use fiemap any more, so the fixes here are minimal changes
to make it build, not shining examples of C++ class design.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
We had an unfortunate pattern of:
const BeesFileRange bfr;
shared_ptr<BeesContext> ctx;
// ...
BEESNOTE("foo " << bfr);
bfr.fd(ctx);
BEESNOTE("foo after opening: " << bfr);
If dump_status started running after the first BEESNOTE, but before
the second, then bfr.fd() might expose a single Fd object's shared_ptr
member to two threads at the same time (the thread running dump_status
and the thread running BEESNOTE) without protection by a lock. One of
the threads would see a partially-initialized Fd object, and the other
thread would crash on an assertion failure, e.g.
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007f4c4fde5537 in __GI_abort () at abort.c:79
#2 0x00007f4c4fde540f in __assert_fail_base (fmt=0x7f4c4ff4e128 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x5557605629dd "!m_destroyed", file=0x5557605627c0 "../include/crucible/namedptr.h", line=77, function=<optimized out>) at assert.c:92
#3 0x00007f4c4fdf4662 in __GI___assert_fail (assertion=assertion@entry=0x5557605629dd "!m_destroyed", file=file@entry=0x5557605627c0 "../include/crucible/namedptr.h", line=line@entry=77,
function=function@entry=0x555760562970 "crucible::NamedPtr<Return, Arguments>::Value::~Value() [with Return = crucible::IOHandle; Arguments = {int}]") at assert.c:101
#4 0x00005557605306f6 in crucible::NamedPtr<crucible::IOHandle, int>::Value::~Value (this=0x7f4a3c2ff0d0, __in_chrg=<optimized out>) at ../include/crucible/namedptr.h:77
#5 0x00005557605137da in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x7f4a3c2ff0c0) at /usr/include/c++/10/bits/shared_ptr_base.h:151
#6 std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x7f4a3c2ff0c0) at /usr/include/c++/10/bits/shared_ptr_base.h:151
#7 std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x7f4c4c5b5f28, __in_chrg=<optimized out>) at /usr/include/c++/10/bits/shared_ptr_base.h:733
#8 std::__shared_ptr<crucible::IOHandle, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x7f4c4c5b5f20, __in_chrg=<optimized out>) at /usr/include/c++/10/bits/shared_ptr_base.h:1183
#9 std::shared_ptr<crucible::IOHandle>::~shared_ptr (this=0x7f4c4c5b5f20, __in_chrg=<optimized out>) at /usr/include/c++/10/bits/shared_ptr.h:121
#10 crucible::Fd::~Fd (this=0x7f4c4c5b5f20, __in_chrg=<optimized out>) at ../include/crucible/fd.h:46
#11 BeesFileRange::file_size (this=0x7f4c4e5ba4a0) at bees-types.cc:156
#12 0x0000555760513950 in operator<< (os=..., bfr=...) at bees-types.cc:80
#13 0x000055576050d662 in std::function<void (std::ostream&)>::operator()(std::ostream&) const (__args#0=..., this=0x7f4c4e5b9f60) at /usr/include/c++/10/bits/std_function.h:622
#14 BeesNote::get_status[abi:cxx11]() () at bees-trace.cc:165
#15 0x00005557604c9676 in BeesContext::dump_status (this=0x5557611c4de0) at bees-context.cc:89
#16 0x00005557605206fb in std::function<void ()>::operator()() const (this=this@entry=0x7f4c4c5b65f0) at /usr/include/c++/10/bits/std_function.h:622
#17 crucible::catch_all(std::function<void ()> const&, std::function<void (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)> const&) (f=..., explainer=...) at error.cc:55
#18 0x000055576050aaa7 in operator() (__closure=0x5557611c52c8) at bees-thread.cc:22
#19 0x00007f4c501beed0 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#20 0x00007f4c502c8ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#21 0x00007f4c4febddef in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Fix by making BeesFileRange::m_fd really const (not just mutable),
then fixing all the broken code that referenced it.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
It turns out we never set m_dirty's initial value. This is not a
practical problem because 1) it's mostly harmless if m_dirty is spuriously
true, 2) we set it to true every time bees scans a data block, and 3)
the allocation happens early in startup when most memory allocations
are using zero-filled pages, so it's probably getting a false value at
construction in most cases.
valgrind complains about it, so it has to go.
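The fix is simply to give the member a defined initial value, e.g. a
default member initializer (sketch; the enclosing class name here is a
placeholder):

    class StateHolder {
        // ...
        bool m_dirty = false;  // defined initial value, no uninitialized read
        // ...
    };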
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Once the physical addresses are known, put them where they can be
seen in BEESTATUS as well as the log.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
There are kernel bugs in LOGICAL_INO from time to time; however, we
can't avoid these bugs by serializing LOGICAL_INO calls.
The serialization hasn't been used for some time, so remove the code
and its less-than-completely-accurate comments.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
For one thing, it should _say_ that there are too many duplicates.
We were making the user read the manual to find that out.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Forcing the entire hash table into immediate writeback causes crippling
write latencies at shutdown. Even discarding pages as they are read in
at startup can trigger a writeback latency spike if the pages are dirty
at read time.
Better to let the VM subsystem handle this on its own.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Putting this information in the logs saves us from having to ask for
the kernel version and machine name every time.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
We need random numbers in more places, so centralize the engines.
Initialize with a proper random seed so every worker thread gets
different behavior.
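A sketch of the centralized engine (illustrative; the function name is
invented):

    #include <random>

    // One engine per thread, each seeded from std::random_device so
    // worker threads don't repeat each other's sequences.
    std::mt19937 &thread_random_engine()
    {
        thread_local std::mt19937 engine{ std::random_device{}() };
        return engine;
    }

    // Example use: a random delay between 0 and 1000 ms
    // int jitter_ms = std::uniform_int_distribution<int>(0, 1000)(thread_random_engine());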
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
At the end of scanning one extent, in theory we do not need that extent
any more. In practice, it hurts benchmark scores if we drop the extents
after reading them.
Add a comment to note this where we put the bees_unreadahead call.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
BEESNOTE can only be seen if the status thread is running at the time,
making the log of activities during shutdown incomplete.
Wake up the status thread early during shutdown so the logged sequence
of shutdown actions is complete.
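The mechanism is roughly a condition-variable wait that shutdown can
interrupt (hedged sketch with invented names):

    #include <chrono>
    #include <condition_variable>
    #include <mutex>

    std::mutex              status_mutex;
    std::condition_variable status_cv;
    bool                    stop_requested = false;

    void status_thread_wait()
    {
        std::unique_lock<std::mutex> lock(status_mutex);
        // Wake up once a second, or immediately when shutdown begins
        status_cv.wait_for(lock, std::chrono::seconds(1), [] { return stop_requested; });
    }

    void begin_shutdown()
    {
        std::unique_lock<std::mutex> lock(status_mutex);
        stop_requested = true;
        status_cv.notify_all();  // wake the status thread early
    }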
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
In the current architecture we can't directly measure the physical extent
size, and we can't make good decisions with the extent data (reference)
item alone. If the early return is enabled here, there is a small speedup
and a large drop in dedupe hit rate, especially when extent splits occur.
Leave the early return commented for now, but collect the event statistics.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Tree searches are all looking for specific item types. Skip over any
item types we are not interested in when resetting the search key for
the next search.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
BtrfsIoctlSearchKeyV2's constructor now fills in nr_items = 1, so we
don't need to set it explicitly any more.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
The vector<uint8_t> in the hash table doesn't hurt very much--only a few
microseconds per 128K hash block.
The vector<uint8_t> in BeesBlockData hurts a bit more--we run that
constructor thousands of times per second.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Previously, when the bees send workaround was enabled, bees would
immediately advance the subvol's crawl status as if the entire subvol
had been scanned.
If the subvol is later made read-write, or if the workaround is disabled,
bees sees that the subvol has already been marked as scanned. This is
an unfortunate result if the subvol is inadvertently marked read-only
or if bees is inadvertently run with the send workaround disabled.
Instead, (almost) completely ignore the subvol: don't advance the crawl
pointer, don't consider the subvol in the list of searchable roots, and
don't consider the subvol when calculating min_transid for new subvols.
The "almost" part is: if the subvol scan has not yet started, keep its
start timestamp current so it won't mess up subvol traversal performance
metrics.
Also handle exceptions while determining whether a subvol is read-only,
as those apparently do happen.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
In fiemap.h the members of struct fiemap are declared as __u64, but the
FIEMAP_MAX_OFFSET macro is an unsigned long long value:
$ grep FIEMAP_MAX_OFFSET -r /usr/include/
/usr/include/linux/fiemap.h:#define FIEMAP_MAX_OFFSET (~0ULL)
$ grep fe_length -r /usr/include/
/usr/include/linux/fiemap.h: __u64 fe_length; /* length in bytes for this extent */
This results in a type mismatch error on architectures like ppc64le:
fiemap.cc:31:35: note: deduced conflicting types for parameter 'const _Tp' ('long unsigned int' and 'long long unsigned int')
31 | fm.fm_length = min(fm.fm_length, FIEMAP_MAX_OFFSET - fm.fm_start);
| ~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Work around this by copying the macro into a uint64_t constant,
and not using the macro any more.
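The workaround looks roughly like this (the constant name is
illustrative):

    #include <linux/fiemap.h>
    #include <algorithm>
    #include <cstdint>

    // Copy the macro's value into a fixed-width constant once...
    static const uint64_t s_fiemap_max_offset = FIEMAP_MAX_OFFSET;

    // ...then use the constant so both min() arguments end up the same type:
    // fm.fm_length = std::min(fm.fm_length, s_fiemap_max_offset - fm.fm_start);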
Fixes: https://github.com/Zygo/bees/issues/194
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
The hash table is one of the few cases in bees where a non-trivial amount
of page cache memory will be used in a predictable way, so we can advise
the kernel about our IO demands in advance.
Use WILLNEED to prefetch hash table pages at startup.
Use DONTNEED to trigger writeback on hash table pages at shutdown.
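Roughly (hedged sketch, not bees's actual wrapper functions):

    #include <fcntl.h>
    #include <sys/types.h>

    void advise_hash_table(int fd, off_t hash_table_size, bool starting_up)
    {
        if (starting_up) {
            // Prefetch: start reading hash table pages before we need them
            posix_fadvise(fd, 0, hash_table_size, POSIX_FADV_WILLNEED);
        } else {
            // Drop the cache; on Linux this also starts writeback of
            // dirty pages in the range
            posix_fadvise(fd, 0, hash_table_size, POSIX_FADV_DONTNEED);
        }
    }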
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
In theory, we don't need the pread() loop, because the kernel will do a
better job with readahead().
In practice, we might still need the pread() code, as the readahead will
occur at idle IO priority, which could adversely affect bees performance.
More testing is required.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Like filefrag, fiemap was defaulting to FIEMAP_FLAG_SYNC, and providing no
option to turn it off. This prevents observation of delayed allocations,
making fiemap less useful.
Override the default flag setting so fiemap gets the current
(i.e. unflushed) extent map state.
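A sketch of an ioctl call with the sync flag cleared (illustrative
wrapper, not the crucible API):

    #include <linux/fiemap.h>
    #include <linux/fs.h>
    #include <sys/ioctl.h>

    int fiemap_nosync(int fd, struct fiemap *fm)
    {
        // Caller fills in fm_start, fm_length and fm_extent_count.
        // Leaving FIEMAP_FLAG_SYNC unset means delayed allocations
        // stay visible instead of being flushed before the query.
        fm->fm_flags &= ~FIEMAP_FLAG_SYNC;
        return ioctl(fd, FS_IOC_FIEMAP, fm);
    }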
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
LOGICAL_INO_V2 has a maximum limit of 655050 references per extent.
Although it no longer has a crippling performance problem, at roughly
two seconds to process each extent it is still too slow to be useful.
When an extent gains an absurd number of references, stop making any
more references to it. Returning zero extent refs will make bees believe the extent
more. Returning zero extent refs will make bees believe the extent
was deleted, and it will remove the block from the hash table.
This helps speed up processing of highly duplicated large files like
VM images, at the cost of a slightly lower dedupe hit rate.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
In 15ab981d9e "bees: replace uncaught_exception(), deprecated in C++17",
uncaught_exception() was replaced with current_exception(); however,
current_exception() is only valid after an exception has been captured
by a catch block.
BeesTracer wants to know about exceptions _before_ they are caught,
so current_exception() is not useful here.
Instead, conditionally compile using uncaught_exception() or
uncaught_exceptions(), selected by C++ standard version, and make
bees stack traces work again.
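The selection boils down to something like this (sketch; the helper
name is invented, the real code lives in BeesTracer):

    #include <exception>

    static bool exception_in_flight()
    {
    #if __cplusplus >= 201703L
        // C++17 and later: use the plural form
        return std::uncaught_exceptions() > 0;
    #else
        // Pre-C++17: the singular form is still available
        return std::uncaught_exception();
    #endif
    }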
Fixes: 15ab981d9e "bees: replace uncaught_exception(), deprecated in C++17"
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
This allows these components to be used by test executables without
pulling in all of bees, and allows more rapid iteration on their code.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Some users are hitting an exception somewhere in crawl_transid, which
forces bees to return to the transid_max calculation over and over.
Out-of-range transids have also been seen.
Add some BEESTRACE so we can see what we were doing in the exception
handler.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Currently if crawl throws an exception, we don't have basic information
about what was being crawled or even if the crawler was running at all.
These traces also help identify the causes of early exception failures.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
This might be interesting information, though most of the motivation for
this evaporated when kernel 5.7 came out.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
There seem to be multiple ways to do readahead in Linux, and only some
of them work. Hopefully reading the actual data is one of them.
This is an attempt to avoid page-by-page reads in the generic dedupe code.
We load both extents into the VFS cache (read sequentially) and hope they
are still there by the time we call dedupe on them.
We also call readahead(2) and hopefully that either helps or does nothing.
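The approach, roughly (simplified sketch with error handling elided;
not the exact bees code):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>
    #include <algorithm>
    #include <vector>

    void prime_page_cache(int fd, off_t offset, size_t length)
    {
        // May help, may do nothing on btrfs -- see above
        readahead(fd, offset, length);

        // Reading the data is the part that definitely populates the cache
        std::vector<char> buf(128 * 1024);
        size_t done = 0;
        while (done < length) {
            const ssize_t rv = pread(fd, buf.data(),
                                     std::min(buf.size(), length - done),
                                     offset + done);
            if (rv <= 0) {
                break;  // EOF or error: stop quietly in this sketch
            }
            done += rv;
        }
    }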
Signed-off-by: Zygo Blaxell <bees@furryterror.org>