Use a different character to make it easier to search for bytenr ranges
in the logs.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit d43199e3d6e6469264eb10de8b0a783f8573e0e8)
This will allow the default size limit for cache objects to be changed
with impunity.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit 9daa51edaab44c02ce0917ff94b20683036d7594)
perf blames the SEARCH_V2 ioctl wrapper for a lot of time spent in malloc.
Use a thread_local buffer for ioctl results, and reuse it between runs.
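A minimal sketch of the approach (not the bees code itself), assuming a
64K result buffer; only the first call in each thread pays for the
allocation:

    #include <cstdint>
    #include <vector>
    #include <sys/ioctl.h>
    #include <linux/btrfs.h>

    void search_v2(int fd, const struct btrfs_ioctl_search_key &key)
    {
        // One buffer per thread, allocated on first use, reused afterwards.
        thread_local std::vector<uint8_t> tl_buf;
        tl_buf.resize(sizeof(struct btrfs_ioctl_search_args_v2) + 65536);
        auto *args = reinterpret_cast<struct btrfs_ioctl_search_args_v2 *>(tl_buf.data());
        args->key = key;
        args->buf_size = tl_buf.size() - sizeof(*args);
        ioctl(fd, BTRFS_IOC_TREE_SEARCH_V2, args);
    }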
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit e509210428951e645d33916694a17aed1950991d)
gcc 7 added the implicit-fallthrough warning.
Fallthrough is intentional in some places, so disable the warning.
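For illustration (hypothetical code, not from bees), this is the shape
of switch statement gcc 7 now warns about; the warning is suppressed in
the build flags rather than annotating each case:

    enum Event { EV_WRITE, EV_TOUCH };
    void flush_buffer();
    void update_timestamp();

    void handle(Event event)
    {
        switch (event) {
            case EV_WRITE:
                flush_buffer();
                // intentional fallthrough: a write implies a touch
            case EV_TOUCH:
                update_timestamp();
                break;
        }
    }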
Signed-off-by: Timofey Titovets <nefelim4ag@gmail.com>
Holding file FDs open for long periods of time delays inode destruction.
For very large files this can lead to excessive delays while bees dedups
data that will cease to be reachable.
Use the same workaround for file FDs (in the root_ino cache) that
is used for subvols (in the root cache): forcibly close all cached
FDs at regular intervals. The FD cache will reacquire FDs from files
that still have existing paths, and will abandon FDs from files that
no longer have existing paths. The non-existing-path case is not new
(bees has always been able to discover deleted inodes) so it is already
handled by existing code.
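A minimal sketch of the workaround, with a hypothetical stand-in for
the root_ino cache and an assumed 15-minute interval:

    #include <chrono>
    #include <cstdint>
    #include <map>
    #include <mutex>
    #include <thread>
    #include <unistd.h>

    std::mutex s_cache_mutex;
    std::map<uint64_t, int> s_fd_cache; // inode -> open fd

    void fd_cache_gc_loop()
    {
        while (true) {
            std::this_thread::sleep_for(std::chrono::minutes(15));
            std::lock_guard<std::mutex> lock(s_cache_mutex);
            // Closing the FDs lets btrfs destroy deleted inodes; files
            // that still have paths are reopened on the next cache miss.
            for (auto &p : s_fd_cache) close(p.second);
            s_fd_cache.clear();
        }
    }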
Fixes: https://github.com/Zygo/bees/issues/18
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
check_overflow() will invalidate iterators if it decides there are too
many cache entries.
If items are deleted from the cache, search for the inserted item again
to ensure the iterator is valid.
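The pattern looks like this (an illustrative sketch; in the real cache,
eviction may invalidate any outstanding iterator):

    #include <map>
    #include <string>

    std::map<int, std::string> s_cache;

    static void check_overflow()
    {
        // May evict entries, invalidating iterators into the cache.
        while (s_cache.size() > 100) s_cache.erase(s_cache.begin());
    }

    std::string &insert_item(int key, const std::string &value)
    {
        s_cache[key] = value;
        check_overflow();
        // Search for the inserted item again instead of trusting a
        // pre-eviction iterator.
        auto it = s_cache.find(key);
        if (it == s_cache.end()) {
            // The new item itself was evicted; insert it once more.
            it = s_cache.insert(std::make_pair(key, value)).first;
        }
        return it->second;
    }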
Increase size of timestamp to size_t.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Some whitespace fixes. Remove some duplicate code. Don't lock
two BeesStats objects at once in the operator- method.
Get the locking for T& at(const K&) right to avoid locking a mutex
recursively. Make the non-const version of the function private.
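Roughly (a sketch, not the BeesStats code): the public const at() takes
the lock, while the private variant assumes the caller already holds it,
so the mutex is never locked twice on one call path:

    #include <map>
    #include <mutex>
    #include <stdexcept>

    template <class K, class T>
    class StatsMap {
        mutable std::mutex m_mutex;
        std::map<K, T> m_map;

        // Private: the caller must already hold m_mutex.
        T &at(const K &k)
        {
            auto it = m_map.find(k);
            if (it == m_map.end()) throw std::out_of_range("at");
            return it->second;
        }

    public:
        T at(const K &k) const
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            auto it = m_map.find(k);
            if (it == m_map.end()) throw std::out_of_range("at");
            return it->second;
        }
    };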
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Before:

    unique_lock<mutex> lock(some_mutex);
    // lock.~unique_lock() runs as part of the return
    // a reference into the unprotected heap is returned
    return foo[bar];

After:

    unique_lock<mutex> lock(some_mutex);
    // copy the heap object while the mutex is still held
    auto tmp_copy = foo[bar];
    // lock.~unique_lock() runs as part of the return
    // the locally allocated copy is passed to the copy constructor
    return tmp_copy;
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
If we release the lock first (and C++ destructor order says we do), then
the return value will be constructed from data living in an unprotected
container object. That data might be destroyed before we get to the
copy constructor for the return value.
Make a temporary copy of the return value that won't be destroyed by any
other thread, then unlock the mutex, then return the copy object.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Get rid of the ResourceHolder class.
Fix GCC static template member instantiation issues.
Replace assert() with exceptions.
shared_ptr can't seem to do reference counting in a multi-threaded
environment. The code looks correct (for both ResourceHandle and
std::shared_ptr); however, continual segfaults don't lie.
Carpet-bomb with mutex locks to reduce the likelihood of losing shared_ptr
races.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Found by valgrind. It was mostly harmless because the range of
usable values is limited by m_burst (which was initialized) and 0.
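The shape of the fix (member names here are illustrative): give the
member a default value so it can never be read uninitialized:

    struct RateLimiter {
        double m_burst = 1.0;  // was already initialized
        double m_tokens = 0.0; // previously left uninitialized
    };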
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
"s_name" was a thread_local variable, not static, and did not require a
mutex to protect access. A deadlock is possible if a thread triggers an
exception with a handler that attempts to log a message (as the top-level
exception handler in bees does).
Remove multiple unnecessary mutex locks. Rename the thread_local variables
to make their scope clearer.
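A sketch of the pattern, with illustrative names; thread_local storage
is per-thread, so a logging call from an exception handler cannot block
on a lock held by the same thread:

    #include <string>

    static thread_local std::string tl_name = "bees";

    void set_thread_name(const std::string &name)
    {
        tl_name = name; // no mutex: only the calling thread sees this
    }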
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Add MADV_DONTDUMP to the list of advice flags.
There are now three flags which may or may not be supported by the
target kernel. Try each one and log its success or failure separately.
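Sketched roughly (the exact flag set here is an assumption; the point
is one madvise call and one log line per flag):

    #include <cstddef>
    #include <cstdio>
    #include <sys/mman.h>

    void advise_hash_table(void *addr, size_t len)
    {
        static const int advices[] = { MADV_HUGEPAGE, MADV_DONTFORK, MADV_DONTDUMP };
        for (int advice : advices) {
            if (madvise(addr, len, advice) == 0) {
                fprintf(stderr, "madvise(%d): OK\n", advice);
            } else {
                perror("madvise"); // this flag is unsupported; try the rest
            }
        }
    }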
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
The hash table statistics calculation in BeesHashTable::prefetch_loop
and the data-driven operation of the extent scanner always pull the
hash table into RAM as fast as the disk will push the data. We never
use the prefetch rate limit, so remove it.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
This fixes a bug where bees tries to process itself as a btrfs filesystem.
This is a species of bug that I only notice *after* pushing to a public
git repo.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Extend the LockSet class so that the total number of locked (active)
items can be limited. When the limit is reached, no new items can be
locked until some existing locked items are unlocked.
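An illustrative sketch of the counted LockSet (simplified, not the real
implementation): lock() blocks while the item is already held or the
active count is at the limit:

    #include <condition_variable>
    #include <cstddef>
    #include <mutex>
    #include <set>

    template <class T>
    class LockSet {
        std::mutex m_mutex;
        std::condition_variable m_cond;
        std::set<T> m_locked;
        std::size_t m_max_locked = 16; // assumed default limit

    public:
        void lock(const T &name)
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_cond.wait(lock, [&] {
                return !m_locked.count(name) && m_locked.size() < m_max_locked;
            });
            m_locked.insert(name);
        }

        void unlock(const T &name)
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_locked.erase(name);
            m_cond.notify_all();
        }
    };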
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Every git commit was causing bees.cc and bees-hash.cc to be rebuilt,
which was expensive and unnecessary.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
The btrfs LOGICAL_INO ioctl has no way to report references to compressed
blocks precisely, so we must always consider all references to a
compressed block, and discard those that do not have the desired offset.
When we encounter compressed shared extents containing a mix of unique
and duplicate data, we attempt to replace all references to the mixed
extent with the same number of references to multiple extents consisting
entirely of unique or duplicate blocks. An early exit from the loop
in BeesResolver::for_each_extent_ref was stopping this operation early,
after replacing as few as one shared reference. This left other shared
references to the unique data on the filesystem, effectively creating
new dup data.
The failing pattern looks like this:
    dedup: replace 0x14000..0x18000 from some other extent
    copy: 0x10000..0x14000
    dedup: replace 0x10000..0x14000 with the copy
    [may be multiple dedup lines due to multiple shared references]
    copy: 0x18000..0x1c000
    [missing dedup 0x18000..0x1c000 with the copy here]
    scan: 0x10000 [++++dddd++++] 0x1c000
If the extent 0x10000..0x1c000 is shared and compressed, we will make
a copy of the extent at 0x18000..0x1c000. When we try to dedup this
copy extent, LOGICAL_INO will return a mix of references to the data
at logical 0x10000 and 0x18000 (which are both references to the
original shared extent with different offsets). If we break out
of the loop too early, we will stop as soon as a reference to 0x10000
is found, and ignore all other references to the extent we are trying
to remove.
The copy at the beginning of the extent (0x10000..0x14000) usually
works because all references to the extent cover the entire extent.
When bees performs the dedup at 0x14000..0x18000, bees itself creates
the shared references with different offsets.
Uncompressed extents were not affected because LOGICAL_INO can locate
physical blocks precisely if they reside in uncompressed extents.
This change will hurt performance when looking up old physical addresses
that belong to new data, but that is a much less urgent problem.
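The control-flow fix, reduced to a sketch (types and names are
illustrative stand-ins for BeesResolver::for_each_extent_ref):

    #include <cstdint>
    #include <vector>

    struct ExtentRef { uint64_t offset; };

    void replace_ref(const ExtentRef &) { /* dedup one reference */ }

    void for_each_extent_ref(const std::vector<ExtentRef> &refs, uint64_t wanted_offset)
    {
        for (const auto &ref : refs) {
            // Compressed extents make LOGICAL_INO report refs at every
            // offset; keep only those with the offset we want.
            if (ref.offset != wanted_offset) continue;
            replace_ref(ref);
            // No early break here: stopping after one reference left
            // other shared refs in place, creating new dup data.
        }
    }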
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
btrfs provides a flush on rename when the rename target exists, so the
fsync is not necessary. In the initialization case (when the rename
target does not exist and the implicit flush does not occur), the file
may be empty or a hole after a crash. Bees treats this case the same
as if the file did not exist. Since this condition occurs for only the
first 15 minutes of the lifetime of a bees installation, it's not worth
bothering to fix.
If we attempt to fsync the file ourselves, on a crash with log replay,
btrfs will end up with a directory entry pointing to a non-existent inode.
This directory entry cannot be deleted or renamed except by deleting
the entire subvol. On large filesystems this bug is triggered by nearly
every crash (verified on kernels up to 4.5.7).
Remove the fsync to avoid the btrfs bug, and accept the failure mode
that occurs in the first 15 minutes after a bees install.
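The resulting save path looks roughly like this (a sketch, not the bees
code): write a temporary file and rename it over the target, with no
fsync anywhere:

    #include <cstdio>
    #include <fstream>
    #include <string>

    void save_state(const std::string &path, const std::string &data)
    {
        std::string tmp = path + ".tmp";
        {
            std::ofstream out(tmp, std::ios::trunc | std::ios::binary);
            out << data;
        } // close, but deliberately no fsync (see above)
        // btrfs flushes on rename when the target already exists.
        std::rename(tmp.c_str(), path.c_str());
    }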
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Previously, the scan order processed each subvol in order. This required
very large amounts of temporary disk space, as a full filesystem scan
was required before any shared extents could be deduped. If the hash
table RAM was underprovisioned this would mean some shared dup blocks
were removed from the hash table before they could be deduped.
Currently the scan order takes the first unscanned extent from each
subvol. This works well if--and only if--the subvols are either empty
or children of a common ancestor. It forces the same inode/offset pairs
to be read at close to the same time from each subvol.
When a new snapshot is created, this ordering diverts scanning to the
new subvol until it catches up to the existing subvols. For large
filesystems with frequent snapshot creation this means that the scanner
never reaches the end of all subvols. Each new subvol effectively
resets the current scan position for the entire filesystem to zero.
This prevents bees from ever completing the first filesystem scan.
Change the order again, so that we now read one unscanned extent from
each subvol in round-robin fashion. When a new subvol is created, we
share scan time between old and new subvols. This ensures we eventually
finish scanning initial subvols and enter the incremental scanning state.
The cost of this change is more repeated reading of shared extents at
scan time with less benefit from disk-device-level caching; however, the
only way to really fix this problem is to implement scanning on tree 2
(the btrfs extent tree) instead of the subvol trees.
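In sketch form (illustrative types; the real scanner tracks crawl state
per subvol):

    #include <deque>

    struct Subvol {
        int extents_left = 3;
        bool scan_one_extent() { return --extents_left > 0; } // stub
    };

    void scan_loop(std::deque<Subvol *> subvols)
    {
        while (!subvols.empty()) {
            Subvol *s = subvols.front();
            subvols.pop_front();
            if (s->scan_one_extent()) {
                // Round-robin: one extent per subvol per pass, so new
                // subvols share scan time instead of monopolizing it.
                subvols.push_back(s);
            }
        }
    }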
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
The thread name has an arbitrarily limited size, and we are eventually
removing support for multiple paths in a single bees daemon process.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
This gets rid of some more big memsets. It may replace them
with a lot of tiny mallocs, though. If this turns out to be
a bad idea then at least we can easily revert the change.
We really do need some large buffers for BtrfsIoctlSearchKey in some
cases, but we don't need to zero them out first, so skip the memset
and save some CPU.
Reduce the default buffer size to 4K because most BISK users don't
need much more than 1K. Set the buffer size explicitly to the product of
the number of items and the desired item size in the places that really
need a lot of items.
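A sketch of the sizing policy (names are illustrative): the default
buffer is 4K and is not zeroed, and callers that need many items size
it explicitly:

    #include <cstddef>
    #include <cstdint>
    #include <memory>

    struct SearchBuf {
        std::unique_ptr<uint8_t[]> m_buf;
        std::size_t m_size;

        explicit SearchBuf(std::size_t size = 4096)
            : m_buf(new uint8_t[size]), // default-init: no memset
              m_size(size)
        {
        }
    };

    // Callers that really need a lot of items:
    //   SearchBuf big(max_items * mean_item_size);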