When a toxic extent is discovered, insert the offending hash/address/toxic
entry into the hash table.
When a previously discovered toxic extent is encountered, do nothing,
i.e. allow the offending hash/address/toxic entry in the hash table
to expire.
Previously both inserts were removed from the code, but the former insert
is required. The latter insert, if kept, would prevent bees from ever
forgiving toxic extents (or any hash matching one) should they be
relocated, deleted, or simply become non-toxic.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
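For illustration only, a minimal sketch of the intended behaviour, using
hypothetical types and names rather than the actual bees hash-table code:

    // Hypothetical types and names; not the actual bees hash table.
    #include <cstdint>
    #include <unordered_map>

    struct HashEntry {
        uint64_t addr;   // physical address of the matching block
        bool     toxic;  // set when the extent was found to be toxic
    };

    struct HashTable {
        std::unordered_map<uint64_t, HashEntry> m_map;

        // First discovery: insert the offending hash/address/toxic entry.
        void insert_toxic(uint64_t hash, uint64_t addr) {
            m_map[hash] = HashEntry{addr, true};
        }

        // Later encounters: do nothing.  Not refreshing the entry lets it
        // expire, so a relocated, deleted, or no-longer-toxic extent is
        // eventually forgiven.
        void on_toxic_match(uint64_t /*hash*/) {
        }
    };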
GCC 7 and higher turn a previous warning into an error for implicit
fallthrough. Let's hint the compiler that this is intentional here.
Signed-off-by: Kai Krakow <kai@kaishome.de>
(cherry picked from commit 270a91cf17)
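For reference, a minimal sketch of the two common ways to hint intentional
fallthrough to GCC 7's -Wimplicit-fallthrough (a recognized comment or the
C++17 [[fallthrough]] attribute); the exact spelling used in the bees
sources may differ:

    #include <cstdio>

    static int classify(int c) {
        int score = 0;
        switch (c) {
        case 2:
            score += 10;
            // fall through
        case 1:
            score += 1;
            [[fallthrough]];   // C++17 attribute form; the comment above also works
        case 0:
            break;
        default:
            score = -1;
        }
        return score;
    }

    int main() { printf("%d\n", classify(2)); return 0; }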
To make bees more friendly to use with syslog/systemd, we add an option
to omit timestamps from the log output.
Signed-off-by: Kai Krakow <kai@kaishome.de>
This commit adds a simple getopt options parser to show help. This can
be used as a boilerplate for adding more options later.
Signed-off-by: Kai Krakow <kai@kaishome.de>
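A minimal sketch of such a getopt_long(3) boilerplate; the option names
below (e.g. --no-timestamps) are illustrative and may not match the flags
bees actually ships:

    // Illustrative option names; may not match the flags bees actually ships.
    #include <getopt.h>
    #include <cstdio>
    #include <cstdlib>

    int main(int argc, char **argv)
    {
        bool timestamps = true;

        static const struct option long_opts[] = {
            { "help",          no_argument, nullptr, 'h' },
            { "no-timestamps", no_argument, nullptr, 'T' },
            { nullptr,         0,           nullptr, 0   },
        };

        int c;
        while ((c = getopt_long(argc, argv, "hT", long_opts, nullptr)) != -1) {
            switch (c) {
            case 'T':
                timestamps = false;  // let syslog/journald supply its own timestamps
                break;
            case 'h':
            default:
                fprintf(stderr, "Usage: %s [-T|--no-timestamps] [-h|--help]\n", argv[0]);
                return c == 'h' ? EXIT_SUCCESS : EXIT_FAILURE;
            }
        }

        printf("timestamps %s\n", timestamps ? "enabled" : "disabled");
        return 0;
    }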
Remove a number of #if 0's.
Remove the redundant thread yield after implementing the same or better
in LockSet.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
When an extent ref is modified, all of the refs in the same metadata
page get the same transid in the TREE_SEARCH_V2 header. All of those
extents are then rescanned by later subvol scans, causing up to 80%
overhead due to redundant reads of the same extents.
A proper fix for this requires extent-based scanning instead of
extent-ref-based scanning. Until that happens, filter out new references
to old extents.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
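A hedged sketch of the filtering idea, with hypothetical field and function
names rather than the actual bees crawl code:

    // Hypothetical field and function names; not the actual bees crawl code.
    #include <cstdint>

    struct ExtentRef {
        uint64_t extent_generation;  // transid when the extent's data was written
    };

    struct Crawler {
        uint64_t m_min_transid;      // lower bound of the current crawl window

        // The metadata page holding the ref may have been rewritten, giving
        // every ref on that page a new transid; but if the extent itself
        // predates this crawl window it was already scanned, so skip it.
        bool want_ref(const ExtentRef &ref) const {
            return ref.extent_generation >= m_min_transid;
        }
    };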
BEESNOTE puts a message on the status message stack. BEESINFO logs a
message with rate limiting. The message that was flooding the logs
was coming from BEESINFO, not BEESNOTE.
Fix the earlier commit, which removed the wrong message.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
After a few hundred subvol threads start running, the inode cache starts
to thrash, and the log gets spammed with messages of the form:
"open_root_nocache <subvolid>: <path>"
Ideally there would be some way to schedule work to minimize inode
thrashing. Until that gets done, just silence the messages for now.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
With many threads it is inconvenient to reassemble the elided parts of
the dedup src/dst and scan filename output. Simply output the full
names unconditionally, and balance the line lengths.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Tune the concurrency model to work a little better with large numbers
of subvols. This is much less than the full rewrite bees desperately
needs, but it provides a marginal improvement until the new code is ready.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
This code has been #if 0 for a long time, and it seems unlikely it
will ever be useful in the future.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
If we lose a race and open the wrong file, we will not retry with the
next path if the file we opened had incompatible flags. We need to keep
trying paths until we open the correct file or run out of paths.
Fix by moving the inode flag check after the checks for file identity.
Output attributes in hex to be consistent with other attribute error
messages.
There is no need to report root and file paths separately in the error
message for incompatible flags because we have confirmed the identity of
the file before the incompatible flag error is detected. Other messages
in this loop still output root path and file_path separately because
the identity of 'rv' is unknown at the time these messages are emitted.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
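A hedged sketch of the corrected ordering, assuming FS_NOCOW_FL as the
example of an incompatible flag; names and structure are illustrative, not
the actual bees open loop:

    // Hypothetical names and structure; not the actual bees open loop.
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/stat.h>
    #include <linux/fs.h>
    #include <unistd.h>
    #include <string>
    #include <vector>

    // Try each candidate path until the file with the wanted inode is opened.
    // Identity is verified *before* the flag check, so incompatible flags on
    // the wrong file (a lost race) no longer abort the search early.
    int open_by_paths(const std::vector<std::string> &paths, ino_t want_ino, dev_t want_dev)
    {
        for (const auto &p : paths) {
            int fd = open(p.c_str(), O_RDONLY);
            if (fd < 0) continue;

            struct stat st;
            if (fstat(fd, &st) || st.st_ino != want_ino || st.st_dev != want_dev) {
                close(fd);        // wrong file: try the next path
                continue;
            }

            int flags = 0;
            if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == 0 && (flags & FS_NOCOW_FL)) {
                close(fd);        // right file, but incompatible flags (report them in hex)
                return -1;
            }
            return fd;            // correct file, compatible flags
        }
        return -1;                // ran out of paths
    }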
If you have a lot of nocow files, or a few big ones (like VM images),
which contain many potential deduplication candidates, bees becomes
incredibly slow while churning through a lot of "invalid operation"
exceptions.
Let's just skip over such files to get more bang for the buck. I did no
regression testing, as this patch seems trivial (and I cannot imagine any
pitfalls either). The process progresses much faster for me now.
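A minimal sketch of the up-front check, assuming the nocow state is read
via FS_IOC_GETFLAGS; not the actual bees code:

    // Assumes the nocow state is read via FS_IOC_GETFLAGS; not the actual bees code.
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    static bool is_nocow(int fd)
    {
        int flags = 0;
        if (ioctl(fd, FS_IOC_GETFLAGS, &flags) != 0)
            return false;                     // cannot tell; let the normal path decide
        return (flags & FS_NOCOW_FL) != 0;    // skip: dedupe fails with "invalid operation"
    }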
This helps identify causes of the "same physical address in dedup"
exception.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit cc7b4f22b5)
BLOCK_SIZE_MIN_EXTENT_DEFRAG, BLOCK_SIZE_MIN_EXTENT_SPLIT, and others
are no longer used. Remove them.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit a3d7032eda)
Add time spent in file create and copy operations to the stats.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit f01c20f972)
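A hedged sketch of one way to accumulate such timings; the real bees stats
macros and counter names differ:

    // Hypothetical stats plumbing; the real bees counters and names differ.
    #include <chrono>
    #include <map>
    #include <mutex>
    #include <string>

    static std::map<std::string, double> g_stats;
    static std::mutex g_stats_mutex;

    // Accumulates wall-clock seconds spent in a scope under a named counter,
    // e.g. one counter for tmpfile create and one for tmpfile copy.
    struct ScopedTimer {
        std::string m_name;
        std::chrono::steady_clock::time_point m_start = std::chrono::steady_clock::now();
        explicit ScopedTimer(std::string name) : m_name(std::move(name)) {}
        ~ScopedTimer() {
            std::chrono::duration<double> d = std::chrono::steady_clock::now() - m_start;
            std::lock_guard<std::mutex> lock(g_stats_mutex);
            g_stats[m_name] += d.count();
        }
    };

    // Usage: { ScopedTimer t("tmp_copy"); /* ... do the copy ... */ }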
A BEESTRACE closure could throw an exception. Trap those so we don't
end up in terminate().
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit 59660cfc00)
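A hedged sketch of the guard, with hypothetical names; the real BEESTRACE
machinery differs:

    // Hypothetical names; the real BEESTRACE machinery differs.
    #include <exception>
    #include <functional>
    #include <iostream>
    #include <list>

    static thread_local std::list<std::function<void()>> tl_trace_closures;

    // Invoked while another exception is already being handled (e.g. from a
    // destructor during unwinding).  Letting a closure's own exception escape
    // from there would call std::terminate(), so trap it here.
    static void dump_trace()
    {
        for (const auto &fn : tl_trace_closures) {
            try {
                fn();
            } catch (const std::exception &e) {
                std::cerr << "exception in trace closure: " << e.what() << "\n";
            } catch (...) {
                std::cerr << "exception in trace closure\n";
            }
        }
    }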
Reads can block indefinitely due to bugs, low io priority, or poor
storage performance. Record the block origin data in the thread state
so we can see which reads are problematic.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit f56f736d28)
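A hedged sketch of recording the origin before a potentially blocking read;
the real bees thread-status mechanism (BEESNOTE) differs:

    // Hypothetical names; the real bees thread-status mechanism differs.
    #include <unistd.h>
    #include <cstdint>
    #include <sstream>
    #include <string>

    static thread_local std::string tl_thread_note;

    // Record where a read came from before blocking in pread(), so a status
    // dump can show which read is stuck and which extent it belongs to.
    ssize_t traced_pread(int fd, void *buf, size_t count, off_t offset, uint64_t bytenr)
    {
        std::ostringstream oss;
        oss << "pread fd " << fd << " offset " << offset
            << " length " << count << " bytenr " << bytenr;
        tl_thread_note = oss.str();
        ssize_t rv = pread(fd, buf, count, offset);
        tl_thread_note.clear();
        return rv;
    }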
Use () instead of [] when the respective end of the byte range touches
the beginning or end of the file. Also omit the '0' at beginning of
file.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit 3023b7f57a)
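A hedged sketch of the formatting rule with hypothetical names; the real
bees pretty-printer may differ in details:

    // Hypothetical names; output details of the real pretty-printer may differ.
    #include <cstdint>
    #include <iostream>
    #include <sstream>
    #include <string>

    // "[0xbegin..0xend]" normally; '(' with the offset omitted when the range
    // starts at the beginning of the file, ')' when it runs to the end.
    std::string format_range(uint64_t begin, uint64_t end, uint64_t file_size)
    {
        std::ostringstream oss;
        oss << std::hex;
        if (begin == 0) oss << "(";
        else            oss << "[0x" << begin;
        oss << "..";
        if (end >= file_size) oss << ")";
        else                  oss << "0x" << end << "]";
        return oss.str();
    }

    int main()
    {
        std::cout << format_range(0, 0x1000, 0x4000) << "\n";       // (..0x1000]
        std::cout << format_range(0x1000, 0x4000, 0x4000) << "\n";  // [0x1000..)
        return 0;
    }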
Use a different character to make it easier to search for bytenr ranges
in the logs.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit d43199e3d6)
This will allow the default size limit for cache objects to be changed
with impunity.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
(cherry picked from commit 9daa51edaa)
BLOCK_SIZE_MIN_EXTENT_DEFRAG, BLOCK_SIZE_MIN_EXTENT_SPLIT, and others
are no longer used. Remove them.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Reads can block indefinitely due to bugs, low io priority, or poor
storage performance. Record the block origin data in the thread state
so we can see which reads are problematic.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
Use () instead of [] when the respective end of the byte range touches
the beginning or end of the file. Also omit the '0' at beginning of
file.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
All testing so far indicates that more crawlers go faster, up to a limit
much larger than btrfs's performance limitations on subvols, even on
spinning rust. Remove the artificial constraint.
Signed-off-by: Zygo Blaxell <bees@furryterror.org>