If we lose a race and open the wrong file, we will not retry with the
next path if the file we opened had incompatible flags. We need to keep
trying paths until we open the correct file or run out of paths.
Fix by moving the inode flag check after the checks for file identity.
Output attributes in hex to be consistent with other attribute error
messages.
There is no need to report root and file paths separately in the error
message for incompatible flags because we have confirmed the identity of
the file before the incompatible flag error is detected. Other messages
in this loop still output root path and file_path separately because
the identity of 'rv' is unknown at the time these messages are emitted.
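A minimal sketch of the reordered loop (not the actual bees code; the
helper details and the nocow flag check are illustrative assumptions):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <linux/fs.h>
#include <cstdio>
#include <string>
#include <vector>

// Keep trying candidate paths until we open the inode we were looking
// for, checking identity before flags.
int open_inode_sketch(const std::vector<std::string> &paths, ino_t want_ino)
{
    for (const auto &path : paths) {
        int fd = open(path.c_str(), O_RDONLY);
        if (fd < 0) continue;

        // Identity check first: if we lost a race and this path now
        // points at a different inode, try the next path.  (The real
        // check also confirms the subvol.)
        struct stat st;
        if (fstat(fd, &st) || st.st_ino != want_ino) {
            close(fd);
            continue;
        }

        // Flag check only after identity is confirmed: every other
        // path resolves to the same inode, so retrying is pointless.
        int flags = 0;
        if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == 0 && (flags & FS_NOCOW_FL)) {
            std::fprintf(stderr, "%s: incompatible inode flags %#x\n", path.c_str(), flags);
            close(fd);
            return -1;
        }
        return fd;
    }
    return -1;  // ran out of paths
}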
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
If you have a lot of nocow files, or a few big ones (like VM images),
which contain a lot of potential deduplication candidates, bees becomes
incredibly slow, running through a lot of "invalid operation" exceptions.
Let's just skip over such files to get more bang for the buck. I did no
regression testing as this patch seems trivial (and I cannot imagine any
pitfalls either). The process progresses much faster for me now.
Keep track of the locking thread so we can see why we are deadlocked
in gdb.
Use a handle type for locks based on shared_ptr. Change the handle type
name to flush out any non-auto local variables.
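A rough sketch of the idea (hypothetical names, not the actual crucible
interface):

#include <memory>
#include <mutex>
#include <thread>

// A shared_ptr-based lock handle that records which thread took the
// lock, so a deadlock can be diagnosed from gdb by inspecting m_owner.
class TrackedLock {
    std::mutex      m_mutex;
    std::thread::id m_owner;    // visible in a core dump or from gdb
public:
    using Handle = std::shared_ptr<void>;

    Handle lock() {
        m_mutex.lock();
        m_owner = std::this_thread::get_id();
        // The deleter releases the lock when the last copy of the
        // handle goes away (it must be destroyed on the locking
        // thread, since std::mutex requires that).
        return Handle(nullptr, [this](void *) {
            m_owner = std::thread::id();
            m_mutex.unlock();
        });
    }
};

// Usage: the new handle type makes "auto lock = ..." the natural form.
// auto handle = some_lock.lock();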
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit aa0b22d445664409c36503c6fd808bc49b6816d0)
This helps identify causes of the "same physical address in dedup"
exception.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit cc7b4f22b5df3a1f52d27060ee8a6a3352b8cd10)
BLOCK_SIZE_MIN_EXTENT_DEFRAG, BLOCK_SIZE_MIN_EXTENT_SPLIT, and others
are no longer used. Remove them.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit a3d7032edaf5fc584412d0dcf8773f1cafa8f2dc)
Add time spent in file create and copy operations to the stats.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit f01c20f97269083175a74d1a1fd3ebaced2d9560)
A BEESTRACE closure could throw an exception. Trap those so we don't
end up in terminate().
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit 59660cfc00b9ca233eeb1a7cdf6df34a45a2deba)
Reads can block indefinitely due to bugs, low io priority, or poor
storage performance. Record the block origin data in the thread state
so we can see which reads are problematic.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit f56f736d28970a0f03ee887a5bd5515cc749d413)
This lets us use more default constructors.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit 8a932a632ff4602a0357ed5fbcd3f86b6bc50283)
Use () instead of [] when the respective end of the byte range touches
the beginning or end of the file. Also omit the '0' at the beginning
of the file.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit 3023b7f57a3003242bc770bcfe55f666227680ff)
Use a different character to make it easier to search for bytenr ranges
in the logs.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit d43199e3d6e6469264eb10de8b0a783f8573e0e8)
This will allow the default size limit for cache objects to be changed
with impunity.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit 9daa51edaab44c02ce0917ff94b20683036d7594)
perf blames the SEARCH_V2 ioctl wrapper for a lot of time spent in malloc.
Use a thread_local buffer for ioctl results, and reuse it between runs.
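A minimal sketch of the reuse pattern (not the actual wrapper code):

#include <cstdint>
#include <vector>

// One buffer per thread, kept between calls, so malloc is only hit when
// the buffer has to grow.
static std::vector<uint8_t> &search_ioctl_buffer()
{
    static thread_local std::vector<uint8_t> buf;
    return buf;
}

// In the BTRFS_IOC_TREE_SEARCH_V2 wrapper (call itself omitted):
//    auto &buf = search_ioctl_buffer();
//    if (buf.size() < wanted_size) buf.resize(wanted_size);  // grows, never shrinks
//    ... build struct btrfs_ioctl_search_args_v2 at buf.data(), then ioctl() ...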
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit e509210428951e645d33916694a17aed1950991d)
GCC 7+ added the implicit-fallthrough warning. In some places
fallthrough is intentional, so disable the warning.
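For illustration only (the commit just disables the warning): the kind
of switch that trips it, and a comment form that GCC can recognize as
marking an intentional fallthrough (C++17 also offers a [[fallthrough]];
attribute):

#include <cstdio>

static void handle_option(int opt, int &verbose)
{
    switch (opt) {
    case 'v':
        ++verbose;
        // fall through
    case 'q':
        std::printf("verbosity is now %d\n", verbose);
        break;
    default:
        std::printf("unknown option\n");
        break;
    }
}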
Signed-off-by: Timofey Titovets <nefelim4ag@gmail.com>
Holding file FDs open for long periods of time delays inode destruction.
For very large files this can lead to excessive delays while bees dedups
data that will cease to be reachable.
Use the same workaround for file FDs (in the root_ino cache) that
is used for subvols (in the root cache): forcibly close all cached
FDs at regular intervals. The FD cache will reacquire FDs from files
that still have existing paths, and will abandon FDs from files that
no longer have existing paths. The non-existing-path case is not new
(bees has always been able to discover deleted inodes) so it is already
handled by existing code.
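A rough sketch of the workaround (the names and the interval are
illustrative, not the actual bees members):

#include <chrono>
#include <cstdint>
#include <map>
#include <mutex>
#include <thread>
#include <unistd.h>
#include <utility>

// A cache of file descriptors keyed by (root, inode) that is emptied on
// a timer.  Dropping the FDs lets the kernel destroy inodes of deleted
// files; live files are simply reopened by path on the next cache miss.
struct FdCacheSketch {
    std::mutex m_mutex;
    std::map<std::pair<uint64_t, uint64_t>, int> m_fd_by_root_ino;

    void clear() {
        std::unique_lock<std::mutex> lock(m_mutex);
        for (auto &p : m_fd_by_root_ino) close(p.second);
        m_fd_by_root_ino.clear();
    }
};

void fd_cache_clear_loop(FdCacheSketch &cache, std::chrono::seconds interval)
{
    for (;;) {  // runs for the life of the process
        std::this_thread::sleep_for(interval);
        cache.clear();
    }
}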
Fixes: https://github.com/Zygo/bees/issues/18
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
check_overflow() will invalidate iterators if it decides there are too
many cache entries.
If items are deleted from the cache, search for the inserted item again
to ensure the iterator is valid.
Increase size of timestamp to size_t.
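The iterator fix amounts to something like this (generic names, not the
exact crucible cache code):

#include <map>

// Insert, let check_overflow() possibly evict entries (which, depending
// on the container, invalidates iterators), then look the key up again
// rather than reusing any earlier iterator.
template <class Map, class Key, class Value, class Overflow>
typename Map::iterator insert_and_refind(Map &map, const Key &key, const Value &value, Overflow check_overflow)
{
    map[key] = value;
    check_overflow();       // may delete cache entries
    return map.find(key);   // re-search; caller handles eviction of the new entry
}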
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Some whitespace fixes. Remove some duplicate code. Don't lock
two BeesStats objects in the - operator method.
Get the locking for T& at(const K&) right to avoid locking a mutex
recursively. Make the non-const version of the function private.
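A sketch of the pattern (not the actual BeesStats code): the public
const at() takes the lock and returns a copy; the private non-const
at() assumes the caller already holds the lock, so internal callers
never lock the mutex twice.

#include <map>
#include <mutex>
#include <stdexcept>
#include <utility>

template <class K, class T>
class LockedMapSketch {
    mutable std::mutex m_mutex;
    std::map<K, T>     m_map;

    // Private: only called with m_mutex already held.
    T &at(const K &k) {
        auto it = m_map.find(k);
        if (it == m_map.end()) throw std::out_of_range("no such key");
        return it->second;
    }

public:
    T at(const K &k) const {
        std::unique_lock<std::mutex> lock(m_mutex);
        auto it = m_map.find(k);
        if (it == m_map.end()) throw std::out_of_range("no such key");
        return it->second;  // copy made while the lock is held
    }

    void add(const K &k, const T &v) {
        std::unique_lock<std::mutex> lock(m_mutex);
        if (m_map.count(k)) at(k) += v;  // non-const at(): no second lock
        else m_map.insert(std::make_pair(k, v));
    }
};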
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Before:
unique_lock<mutex> lock(some_mutex);
// run lock.~unique_lock() because return
// return reference to unprotected heap
return foo[bar];
After:
unique_lock<mutex> lock(some_mutex);
// make copy of object on heap protected by mutex lock
auto tmp_copy = foo[bar];
// run lock.~unique_lock() because return
// pass locally allocated object to copy constructor
return tmp_copy;
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Before:
unique_lock<mutex> lock(some_mutex);
// run lock.~unique_lock() because return
// return reference to unprotected heap
return foo[bar];
After:
unique_lock<mutex> lock(some_mutex);
// make copy of object on heap protected by mutex lock
auto tmp_copy = foo[bar];
// run lock.~unique_lock() because return
// pass locally allocated object to copy constructor
return tmp_copy;
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
If we release the lock first (and C++ destructor order says we do), then
the return value will be constructed from data living in an unprotected
container object. That data might be destroyed before we get to the
copy constructor for the return value.
Make a temporary copy of the return value that won't be destroyed by any
other thread, then unlock the mutex, then return the copy object.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Get rid of the ResourceHolder class.
Fix GCC static template member instantiation issues.
Replace assert() with exceptions.
shared_ptr can't seem to do reference counting in a multi-threaded
environment. The code looks correct (for both ResourceHandle and
std::shared_ptr); however, continual segfaults don't lie.
Carpet-bomb with mutex locks to reduce the likelihood of losing shared_ptr
races.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Found by valgrind. It was mostly harmless because the range of
usable values is limited by m_burst (which was initialized) and 0.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
"s_name" was a thread_local variable, not static, and did not require a
mutex to protect access. A deadlock is possible if a thread triggers an
exception with a handler that attempts to log a message (as the top-level
exception handler in bees does).
Remove multiple unnecessary mutex locks. Rename the thread_local variables
to make their scope clearer.
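A minimal sketch of the pattern (illustrative names): each thread only
ever touches its own copy, so no mutex is involved and logging from an
exception handler cannot deadlock on it.

#include <string>

static thread_local std::string tl_name = "bees";

void set_thread_name(const std::string &name) { tl_name = name; }  // no lock needed
const std::string &get_thread_name() { return tl_name; }           // safe from exception handlers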
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Add MADV_DONTDUMP to the list of advice flags.
There are now three flags which may or may not be supported by the
target kernel. Try each one and log its success or failure separately.
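A sketch of the per-flag attempt (the flag list other than MADV_DONTDUMP
is an assumption, and the logging style is illustrative):

#include <sys/mman.h>
#include <cerrno>
#include <cstddef>
#include <cstring>
#include <iostream>

void advise_hash_table(void *addr, size_t length)
{
    const struct { int advice; const char *name; } advices[] = {
        { MADV_HUGEPAGE, "MADV_HUGEPAGE" },
        { MADV_DONTFORK, "MADV_DONTFORK" },
        { MADV_DONTDUMP, "MADV_DONTDUMP" },
    };
    for (const auto &a : advices) {
        if (madvise(addr, length, a.advice)) {
            std::cerr << a.name << " failed: " << std::strerror(errno) << "\n";
        } else {
            std::cerr << a.name << " ok\n";
        }
    }
}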
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
The hash table statistics calculation in BeesHashTable::prefetch_loop
and the data-driven operation of the extent scanner always pull the
hash table into RAM as fast as the disk will push the data. We never
use the prefetch rate limit, so remove it.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
This fixes a bug where bees tries to process itself as a btrfs filesystem.
This is a species of bug that I only notice *after* pushing to a public
git repo.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Extend the LockSet class so that the total number of locked (active)
items can be limited. When the limit is reached, no new items can be
locked until some existing locked items are unlocked.
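A sketch of the limit (generic names, not the actual crucible LockSet
interface):

#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <set>

// lock() blocks while the item is already held *or* the number of active
// items has reached the cap; unlock() wakes the waiters.
template <class Key>
class LimitedLockSetSketch {
    std::mutex              m_mutex;
    std::condition_variable m_cond;
    std::set<Key>           m_held;
    size_t                  m_max_size = 16;  // illustrative default

public:
    void lock(const Key &k) {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cond.wait(lock, [&] {
            return !m_held.count(k) && m_held.size() < m_max_size;
        });
        m_held.insert(k);
    }

    void unlock(const Key &k) {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_held.erase(k);
        m_cond.notify_all();
    }
};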
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Every git commit was causing bees.cc and bees-hash.cc to be rebuilt,
which was expensive and unnecessary.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>