mirror of https://github.com/Zygo/bees.git

146 Commits

Zygo Blaxell
bf2a014607 roots: improve "RO root 6094" message
This sequence of log messages isn't clear:

	crawl_master: WORKAROUND: Avoiding RO subvol 6094
	crawl_master: WORKAROUND: RO root 6094

The first is from a cache miss, and appears wherever a root is opened
(dedupe or crawl).  The second is skipping an entire subvol scan, and
only happens in crawl_master.

Elaborate on the second message a little.

Also use the term "root" consistently when referring to subvol tree IDs.
btrfs refers to these objects by (at least) three distinct names:  tree,
subvol, and root.  Using three different words for the same thing is worse
than using a single wrong word consistently to refer to the same concept.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-11-22 21:10:15 -05:00
Zygo Blaxell
cdca2bcdcd main: single BeesContext instance per process
After weeks of testing I copied part of a change to main without copying
the rest of the change, leading to an immediate segfault on startup.

So here is the rest of the change:  limit the number of
BeesContexts per process to 1.  This change was discussed at
https://github.com/Zygo/bees/issues/54#issuecomment-360332529 but there
are more reasons to do it now:  the candidates to replace the current
hash table format are less forgiving of sharing hash tables, and it may
even become necessary to have more than one hash table per BeesContext
instance (e.g. to keep datasum and nodatasum data separate).

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-11-22 20:40:30 -05:00
Zygo Blaxell
34b04f4255 bees: soft-limit computed thread counts to 8
https://github.com/Zygo/bees/issues/91 describes problems encountered
when running bees on systems with many CPU cores.

Limit the computed number of threads (using --thread-factor or the
default) to a maximum of 8 (i.e. the number of logical cores in a modern
laptop).  Users can override the limit by using --thread-count.
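
The clamp itself is tiny; a sketch (std::thread::hardware_concurrency()
stands in for however bees detects the core count, and the names are
illustrative, not the actual bees code):

	#include <algorithm>
	#include <thread>

	unsigned computed_thread_count(double thread_factor)
	{
		unsigned cores = std::max(1u, std::thread::hardware_concurrency());
		unsigned computed = std::max(1u, unsigned(cores * thread_factor));
		return std::min(computed, 8u);	// soft limit; --thread-count bypasses it
	}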

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-11-21 21:49:16 -05:00
Zygo Blaxell
23f3e4ec42 workarounds: add workaround for btrfs send
Introduce --workaround options which trade performance or effectiveness to
avoid triggering kernel bugs.

The first such option is --workaround-btrfs-send, which avoids making any
modification to read-only subvols to avoid btrfs send bugs.

Clean up usage message:  no tabs for formatting, split options into
sections by theme.

Make scan mode a non-static data member like all (most?) other options.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-11-21 21:49:16 -05:00
Zygo Blaxell
e74122b512 resolver: don't log hash collision incidents
The log message is quite CPU-intensive to generate, and some data sets
have enough hash collisions to throw off benchmarks.

Keep the event counter but drop the log message.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-11-16 17:20:49 -05:00
Zygo Blaxell
e3247d3471 stats: streamline add_count
Perf was blaming BeesStats::add_count for >1% of instructions.

Trim the instruction count a little.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-11-08 23:31:50 -05:00
Kai Krakow
c69a954d8f
Makefile: Bring back -O3 in a downstream-compatible way
This commit brings back -O3 but in an overridable way. This should make
downstream distributions happy enough to accept it.

While we're at it, let's apply the same fixup logic to LDFLAGS, too.

This commit also properly gets rid of the implicit rules which collided
too easily with the depends.mk.

Signed-off-by: Kai Krakow <kai@kaishome.de>
2018-11-08 03:23:40 +01:00
Kai Krakow
f2dec480a6
Makefile: mkdir .depends only when needed
Signed-off-by: Kai Krakow <kai@kaishome.de>
2018-11-08 02:56:48 +01:00
Zygo Blaxell
c2762740ef context: remove limit on the number of references to an extent
Better toxic extent detection means we can now handle extents with
many more references--easily hundreds of thousands.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-11-05 21:12:11 -05:00
Zygo Blaxell
aa74a238b3 hash: remove preloaded toxic hash blacklist
Faster and more reliable toxic extent detection means we can now be much
less paranoid about creating toxic extents.

The paranoia has significant impact on dedupe hit rates because every
extent that contains even one toxic hash is abandoned.  The preloaded
toxic hashes were chosen because they occur more frequently than any
other block contents in typical filesystem data.  The combination of these
resulted in as much as 30% of duplicate extents being left untouched.

Remove the preloaded toxic extent blacklist, and rely on the new
kernel-CPU-usage-based workaround instead.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-10-31 23:03:01 -04:00
Zygo Blaxell
542371684c context: better detection for toxic extents
We detect toxic extents by measuring how long the LOGICAL_INO ioctl takes
to run.  If it is above some threshold, we consider the extent toxic,
and blacklist it; otherwise, we process the extent normally.

The detector was using the execution time of the ioctl, which detects
toxic extents, but it also detects pauses of the bees process and
transaction commit latency due to load.  This leads to a significant
number of false positives.  The detection threshold was also very long,
burning a lot of kernel CPU before the detection was triggered.

Use the per-thread system CPU statistics to measure the kernel CPU usage
of the LOGICAL_INO call directly.  This is much more reliable because it
is not confounded by other threads, and it's faster because we can set
the time threshold two orders of magnitude lower.

Also remove the lock and mutex added in "context: serialize LOGICAL_INO
calls" because we theoretically no longer need it (but leave the code
there with #if 0 in case we do need it in practice).
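
A minimal sketch of the measurement, using getrusage(RUSAGE_THREAD) to
read this thread's system CPU time around the ioctl; the wrapper name
and the threshold value are illustrative, not the actual bees code:

	#include <linux/btrfs.h>
	#include <sys/ioctl.h>
	#include <sys/resource.h>

	static double thread_sys_cpu()
	{
		struct rusage ru;
		getrusage(RUSAGE_THREAD, &ru);	// this thread only, kernel time only
		return ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
	}

	bool logical_ino_was_toxic(int fd, btrfs_ioctl_logical_ino_args *args)
	{
		double before = thread_sys_cpu();
		ioctl(fd, BTRFS_IOC_LOGICAL_INO, args);
		double after = thread_sys_cpu();
		// Unaffected by commit latency, scheduler pauses, or other
		// threads, so the threshold can be far lower than a wall-clock one.
		return after - before > 0.1;	// illustrative threshold in seconds
	}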

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-10-31 21:12:16 -04:00
Zygo Blaxell
9a97699dd9 roots: reimplement transid_max_nocache using extent tree root
ROOT_TREE contains the ROOT_ITEM for EXTENT_TREE.  Every modification
(that we care about) to a btrfs must go through EXTENT_TREE, and must
modify the page in ROOT_TREE pointing to the root of EXTENT_TREE...
which makes that a very good source for the filesystem transid.

Remove the loop and the root lookups, and just look at one item for
max_transid.

Also note that every caller of transid_max_nocache() immediately
feeds the return value to m_transid_re.update(), so don't do that
inside transid_max_nocache().
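
Sketched with the raw ioctl (constants from linux/btrfs_tree.h; error
handling trimmed, and not the actual bees code):

	#include <linux/btrfs.h>
	#include <linux/btrfs_tree.h>
	#include <sys/ioctl.h>
	#include <cstdint>
	#include <cstring>
	#include <stdexcept>

	uint64_t transid_max_nocache_sketch(int root_fd)
	{
		btrfs_ioctl_search_args args;
		memset(&args, 0, sizeof(args));
		args.key.tree_id      = BTRFS_ROOT_TREE_OBJECTID;	// search ROOT_TREE...
		args.key.min_objectid = BTRFS_EXTENT_TREE_OBJECTID;	// ...for EXTENT_TREE's
		args.key.max_objectid = BTRFS_EXTENT_TREE_OBJECTID;	// ROOT_ITEM only
		args.key.min_type     = BTRFS_ROOT_ITEM_KEY;
		args.key.max_type     = BTRFS_ROOT_ITEM_KEY;
		args.key.max_offset   = UINT64_MAX;
		args.key.max_transid  = UINT64_MAX;
		args.key.nr_items     = 1;
		if (ioctl(root_fd, BTRFS_IOC_TREE_SEARCH, &args) < 0 || args.key.nr_items < 1)
			throw std::runtime_error("TREE_SEARCH");
		// The header transid tracks the last commit that touched the page
		// holding the EXTENT_TREE root pointer, i.e. the filesystem transid.
		auto hdr = reinterpret_cast<btrfs_ioctl_search_header *>(args.buf);
		return hdr->transid;
	}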

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-10-31 00:09:49 -04:00
Zygo Blaxell
0e8b591232 Revert "roots: simplify BeesRoots::transid_max_nocache"
It turns out that we do need to scan all the subvols in order
to find transid_max.

Keep the bug fix though.

This reverts commit bf6ae80eeec6afcbee505d22af8e62f60dc1c9a6.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-10-30 23:29:05 -04:00
Zygo Blaxell
bf6ae80eee roots: simplify BeesRoots::transid_max_nocache
BeesRoots::transid_max_nocache calls btrfs_get_root_transid() which
retrieves the transid of the root of the given Fd.  Since the FS_TREE
(subvol 5) is the root of the subvol hierarchy, it will always have
the highest transid on the filesystem, and we do not need to look at
any others.

Also fix a bug where we pass BTRFS_FS_TREE_OBJECTID instead of the
file descriptor root_fd() to btrfs_get_root_transid().  If BEESHOME
is somewhere on the same btrfs filesystem, and there are no leaked FDs
at bees startup, then BTRFS_FS_TREE_OBJECTID (5) usually has the same
integer value as a valid file descriptor of some object on the filesystem
that has a regularly increasing transid value.  If Fd 5 happens to be a
file in BEESHOME then bees itself drives the transid increments.  This,
combined with the search of all subvol roots, hides the bug (unless Fd
5 gets closed somehow).

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-10-30 21:12:17 -04:00
Zygo Blaxell
1a51bb53bf context: cache result of home_fd()
BeesContext::home_fd() is supposed to open $BEESHOME once and cache
the Fd for later calls; however, instead it was reopening a new Fd each
time it was called, and _also_ holding that Fd in a BeesContext member.
Fds clean themselves up when they are forgotten, so it was not leaking
per se, but it certainly had more open Fds than it needed to.

Check to see if we have m_home_fd open, and return that if so.
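
The fix is the usual memoization pattern; a sketch (open_or_die is an
illustrative opener, and Fd is assumed to test true when open):

	Fd BeesContext::home_fd()
	{
		if (m_home_fd) {
			return m_home_fd;	// cached: no new open()
		}
		m_home_fd = open_or_die(getenv("BEESHOME"));	// first call only
		return m_home_fd;
	}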

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-10-30 21:12:16 -04:00
Zygo Blaxell
35b21687bc bees: drop unused member m_uuid
There is an m_root_uuid which is used.  m_uuid is not, so drop it
and save a tiny amount of memory.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-10-30 21:12:16 -04:00
Zygo Blaxell
63ddbb9a4f context: serialize LOGICAL_INO calls
LOGICAL_INO can trip over the btrfs slow-backrefs bug, resulting in
some very long in-kernel runtimes.  If too many threads are executing
LOGICAL_INO then there may be no cores left on the system to run other
tasks.

Toxic extent detection is done by a very rudimentary algorithm which
can be confused by unrelated sources of latency within btrfs (especially
commit latency).  The algorithm can also be confused by other threads
executing the LOGICAL_INO ioctl.

These are two good reasons to prevent any two threads in a single bees
process instance from executing LOGICAL_INO at the same time, so let's
do that.

It is possible to limit the number of threads executing LOGICAL_INO with
the -c and -C options; however, this also limits the number of threads
which can perform any operation, while only LOGICAL_INO (*) has such a
profound effect on the rest of system operation.

Also make the status message clearer about exactly when LOGICAL_INO is
executed, as opposed to merely waiting to acquire a lock before executing
the ioctl.

(*) or maybe FILE_EXTENT_SAME.  The problem function that keeps showing
up in kernel stack traces is find_parent_nodes, which is called by both
the LOGICAL_INO and FILE_EXTENT_SAME ioctls.  We'll try this change
first and see if it prevents any recurrences of forced watchdog reboots;
if it does not, then we'll limit FILE_EXTENT_SAME the same way.
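
The serialization itself is one process-wide lock around the ioctl;
a sketch:

	#include <linux/btrfs.h>
	#include <sys/ioctl.h>
	#include <mutex>

	static std::mutex s_logical_ino_mutex;	// one per bees process

	int logical_ino_serialized(int fd, btrfs_ioctl_logical_ino_args *args)
	{
		std::lock_guard<std::mutex> lock(s_logical_ino_mutex);
		// At most one thread is inside the kernel here, so a slow-backrefs
		// loop can burn at most one core at a time.
		return ioctl(fd, BTRFS_IOC_LOGICAL_INO, args);
	}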

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-10-30 21:12:16 -04:00
Zygo Blaxell
373b9ef038 roots: fix subvol scan rollover on subvols with empty transid range
The ordering function for BeesCrawlState did not consider

	root 292 inode 0 min_transid 2345 max_transid 3456

to be larger than

	root 292 inode 258 min_transid 2345 max_transid 2345

so when we attempted to update the end pointer for the crawl progress,
the new state was not considered newer than the old state because the
min_transid was equal, but the new crawl state's inode number was smaller.

Normally this is not a problem because subvol scans typically begin
and end in separate transactions (in part because we don't start a
subvol scan until at least two transactions are available); however,
the cleanup code for the aftermath of the recent transid_min() bug can
create crawlers with equal max_transid and min_transid records.

Fix this by ordering both transid fields before any others in the
crawl state.
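
With std::tie the corrected ordering is easy to express; a sketch with
illustrative member names:

	#include <tuple>

	bool operator<(const BeesCrawlState &a, const BeesCrawlState &b)
	{
		// transids compare first, so a crawl that has moved past the end
		// of its transid range orders after one that has not, regardless
		// of the inode position
		return std::tie(a.m_min_transid, a.m_max_transid, a.m_root, a.m_objectid, a.m_offset)
		     < std::tie(b.m_min_transid, b.m_max_transid, b.m_root, b.m_objectid, b.m_offset);
	}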

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-10-30 21:12:14 -04:00
Zygo Blaxell
866a35c7fb roots: do not accept 18446744073709551615 as max_transid in beescrawl.dat
Due to an earlier bug some beescrawl.dat files will contain uint64_t
max as max_transid.  This prevents any further scanning on the subvol
because there is no possibility of having a real transid (or any other
uint64_t number) larger than uint64_t max.

If we detect a bad transid in beescrawl.dat, log a warning, then use
some more plausible value:  either min_transid to repeat the previous
incremental crawl, or 0 to restart the subvol scan from the beginning.
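
A sketch of the recovery logic (illustrative names; bees logs the
warning through its own macros):

	#include <cstdint>
	#include <limits>

	void sanitize_crawl_state(BeesCrawlState &st)
	{
		const auto bad = std::numeric_limits<uint64_t>::max();
		if (st.m_max_transid == bad) {
			// log a warning here, then pick a sane restart point
			if (st.m_min_transid != bad) {
				st.m_max_transid = st.m_min_transid;	// redo last incremental crawl
			} else {
				st.m_min_transid = st.m_max_transid = 0;	// full rescan of the subvol
			}
		}
	}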

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-10-30 21:12:14 -04:00
Zygo Blaxell
90132182fd roots: do not allow transid_min to be numeric_limits<uint64_t>::max()
On a few test machines max_transid on subvols is getting set to
18446744073709551615 (aka uint64_t max).

Prevent transid_min() from ever returning this value.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-10-30 21:12:14 -04:00
Zygo Blaxell
90f98250c2 hash: remove pointless copy
"saved" is used only during hash table correctness analysis, which is
normally not enabled at compile time, and requires source modification
to enable.

Remove the pointless copy and save a tiny bit of CPU.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-10-19 20:21:04 -04:00
Zygo Blaxell
924008603e hash: reduce hash table extent size to 128KB
The 16MB hash table extent size did not serve any useful defragmentation
or compression purpose, and for very small filesystems (under 100GB),
16MB is much larger than necessary.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-10-19 20:21:04 -04:00
Zygo Blaxell
c01f129eee src: add bees-version.new.c to .gitignore
Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-10-19 20:21:04 -04:00
Kai Krakow
32d2739b0d
Makefile: Specify version when building from tarball
When package maintainers build from a tarball, the .git directory does
not exist to extract the version tag. Let's add a hack to work around
this issue and let them specify `BEES_VERSION="v0.y"` on the make
cmdline.

Github-Bug: https://github.com/Zygo/bees/issues/75
Signed-off-by: Kai Krakow <kai@kaishome.de>
2018-09-30 04:20:26 +02:00
Zygo Blaxell
9dbe2d6fee bees: add -G/--thread-min option for minimum thread count
The -g option limits the number of worker threads when the target load
average is exceeded.  On some systems the load normally runs high, and
continuous bees operation is required to avoid running out of disk space.

Add a -G/--thread-min option to force at least some threads to continue
running.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-09-14 23:50:07 -04:00
Zygo Blaxell
3d536ea6df roots: if queue is full run again
The task queue may already be full of tasks when the crawl task is
executed.  In this case simply reschedule the crawl task at the
end of the current queue.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-09-14 23:50:06 -04:00
Zygo Blaxell
e66086516f bees: dynamic thread pool size based on system load average
Add -g / --loadavg-target parameter to track system load and add or
remove bees worker threads dynamically to keep system load close to the
loadavg target.  Thread count may vary from zero to the maximum
specified by -c or -C, and is adjusted every 5 seconds.

This is better than implementing a similar load average scheme from
outside of the process (though that is still possible) because the
in-process load tracker does not disrupt the performance timing feedback
mechanisms as a freezer cgroup or SIGSTOP would when controlling bees
from outside.  The internal load average tracker can also adjust the
number of active threads while an external tracker can only choose from
the maximum or zero.
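
The adjustment loop reduces to something like this sketch; getloadavg(3)
is the real interface, while set_worker_count() and the one-thread step
are illustrative:

	#include <cstdlib>	// getloadavg()

	void set_worker_count(size_t n);	// illustrative hook into the thread pool

	void load_tracker_poll(double target, size_t max_threads, size_t &current)
	{
		double load = 0;
		if (getloadavg(&load, 1) != 1) return;	// 1-minute load average
		if (load > target && current > 0) {
			--current;	// shed a worker to bring load down
		} else if (load < target && current < max_threads) {
			++current;	// add a worker while there is headroom
		}
		set_worker_count(current);	// called every 5 seconds
	}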

Also fix a bug where a Task could deadlock waiting for itself to exit
if it tries to insert a new Task after the number of worker threads has
been set to zero.

Also correct usage message for --scan-mode (values are 0..2) since
we are touching adjacent lines anyway.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-09-14 23:50:03 -04:00
Zygo Blaxell
96eb100ded bees: use readahead instead of posix_fadvise
Other btrfs utils use readahead() not posix_fadvise().

There does not appear to be a performance or correctness difference
between the three (none, posix_fadvise, or readahead()).
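
The swap is a one-liner; a sketch showing both calls (readahead(2) is
Linux-specific):

	#include <fcntl.h>

	void bees_readahead_sketch(int fd, off_t offset, size_t size)
	{
		// before: posix_fadvise(fd, offset, size, POSIX_FADV_WILLNEED);
		readahead(fd, offset, size);	// after: same hint, same observed effect
	}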

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-09-14 23:50:00 -04:00
Zygo Blaxell
041ad717a5 bees: configurable log verbosity
Log messages were already labelled with log levels, but there was no
way to filter by log level at run time.

Implement the filter inside the bees process so it can skip evaluation
of the BEESLOG* arguments if the log messages would not be emitted.
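
Filtering inside the macro matters because the stream arguments are
never evaluated when the level is filtered out; the shape is roughly
this (bees_log_level and the stderr sink are illustrative):

	#include <iostream>
	#include <sstream>

	extern int bees_log_level;	// set from the new command-line option

	#define BEESLOG_SKETCH(lvl, expr) do { \
		if ((lvl) <= bees_log_level) { \
			std::ostringstream oss_; \
			oss_ << expr;	/* evaluated only if the level passes */ \
			std::cerr << oss_.str() << std::endl; \
		} \
	} while (0)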

Fixes: https://github.com/Zygo/bees/issues/67

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-09-14 23:50:00 -04:00
Zygo Blaxell
b22db12390 context: log dedups with single unbroken log message
When BEESLOGINFO is called multiple times it generates separate log
records that can be mixed up when multiple threads dedup.

Use a single BEESLOGINFO call for each dedup to prevent this.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-09-14 23:50:00 -04:00
Zygo Blaxell
8bc4bee8a3 crucible: progress: drop the set() method
set() was broken and redundant.  Calling hold() and discarding the
returned object has the correct effect.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-09-14 23:49:54 -04:00
Zygo Blaxell
c3effe0a20 crawl: use custom order instead of (ab)using BeesFileRange::operator<
This makes the code clearer and keeps changes to BeesFileRange ordering
isolated.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-05-18 00:16:08 -04:00
Zygo Blaxell
f8c27f5c6a bees: revert TOXIC_INTERVAL back to pre-4.14 levels
Linux kernel 4.14, while resistant to extent toxicity, is not immune to it.

Go back to the paranoid setting to avoid tying up filesystems in
ridiculously long kernel loops in find_parent_nodes.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-05-18 00:16:08 -04:00
Zygo Blaxell
26039cd559 tempfile: update comments around bees_sync
Deadlock reproduced on kernel 4.14.34.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-05-18 00:16:04 -04:00
Zygo Blaxell
c21518d8ff stats: rename "chase_wrong_data" to "chase_no_data"
An empty BeesBlockData from the chasing algorithm used to mean that data
was found at the expected location but it does not match; however, there
are now other reasons for this and they occur much more often.  The name
is misleading.

Change the name to report more correctly what happens:  no data, without
any guess about the reason.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-03-01 00:01:13 -05:00
Zygo Blaxell
082f04818f BeesBlockData: fix data type issues
Not sure if these cause any problems, but they are theoretically
incorrect data types.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-02-28 23:58:28 -05:00
Zygo Blaxell
5bdad7fc93 crucible: progress: a progress tracker for worker queues
The task queue can become very large with many subvols, requiring hours
for the queue to clear.  'beescrawl.dat' saves in the meantime will
record the work currently scheduled, not the work currently completed.

Fix by tracking progress with ProgressTracker.  ProgressTracker::begin()
gives the last completed crawl position.  ProgressTracker::end() gives
the last scheduled crawl position.  begin() does not advance if there
is any item between begin() and end() is not yet completed.  In between
are crawled extents that are on the task queue but not yet processed.
The file 'beescrawl.dat' saves the begin() position while the extent
scanning task queue is fed from the end() position.
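
A minimal sketch of those semantics, with uint64_t standing in for the
crawl position type and an explicit release() standing in for the RAII
object that hold() returns:

	#include <cstdint>
	#include <map>
	#include <mutex>

	class ProgressSketch {
		std::mutex m_mutex;
		uint64_t m_begin = 0, m_end = 0;
		std::map<uint64_t, bool> m_in_flight;	// position -> completed
	public:
		void hold(uint64_t pos) {		// work at pos is now scheduled
			std::lock_guard<std::mutex> lock(m_mutex);
			m_in_flight[pos] = false;
			if (pos > m_end) m_end = pos;	// end() = last scheduled position
		}
		void release(uint64_t pos) {		// work at pos is now completed
			std::lock_guard<std::mutex> lock(m_mutex);
			m_in_flight[pos] = true;
			// begin() advances only past a contiguous completed prefix
			auto it = m_in_flight.begin();
			while (it != m_in_flight.end() && it->second) {
				m_begin = it->first;
				it = m_in_flight.erase(it);
			}
		}
		uint64_t begin() { std::lock_guard<std::mutex> l(m_mutex); return m_begin; }	// saved to beescrawl.dat
		uint64_t end()   { std::lock_guard<std::mutex> l(m_mutex); return m_end; }	// feeds the task queue
	};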

Also remove an unused method crawl_state_get() and repurpose the
operator<(BeesCrawlState) that nobody was using.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-02-28 23:49:39 -05:00
Zygo Blaxell
33d274eabd resolve: break up long intra-extent dedup loops
When both block candidates for dedup are located in the same extent, bees
excludes them from deduplication because the dedup operation would not
free any space (both blocks are still referenced, so neither is deleted).
Candidates in other extents are still considered.

Typically a few blocks are duplicated many thousands or even millions
of times within a filesystem.  Many of these blocks appear in the same
extent as each other.  In cases where an extent contains an extremely
common duplicate block, it may appear multiple times in many extents.
bees can get into a loop with a very bad worst-case running time:  32768
blocks per extent * 2560 bees reference limit * 256 distinct hash table
entries = 21.5 *billion* iterations...squared, because this loop happens
every time bees encounters any of the references.  Not an infinite
number, but close enough.

In each iteration of the loop, replace_dst detects that both src and dst
block are part of the same btrfs extent data item and therefore should
not be deduped; however, this occurs after the block has been allocated
and read by chase_extent_ref.  This dst is discarded, but the outer
loop tries again with another reference to the same block and gets the
same result.

An easy fix for this problem is to stop the loop immediately when the
same physical extent is found in both src and dst.  The condition is rare
enough to ignore the negligible space efficiency loss, and filesystem
scan stops dead if the loop is allowed to proceed.  An exception is
thrown to terminate the loop at scan_one_extent from within replace_dst.
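
The check itself is tiny; a sketch with illustrative names, using a
plain exception where bees has its own machinery:

	#include <stdexcept>

	void replace_dst_sketch(const BeesExtent &src, const BeesExtent &dst)
	{
		if (src.bytenr() == dst.bytenr()) {
			// caught in scan_one_extent: abandon the whole extent rather
			// than iterating over millions of equivalent references
			throw std::runtime_error("src and dst in same physical extent");
		}
		// ... proceed with dedupe ...
	}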

It would be better to determine the extent bytenr of each candidate
extent and filter them out in scan_one_extent (which reduces the number
of LOGICAL_INO calls as a side-effect), but bees has no code capable of
doing extent data tree lookups with backward iteration yet.  Even better
would be to change the hash table format so that the extent bytenr can
be decoded directly from the hash table entry (this already exists for
compressed extents).  Both of these changes are too large for v0.6.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-02-25 10:08:42 -05:00
Zygo Blaxell
8f0e88433e roots: get rid of common error messages, add more error counters
One very common case is losing a race to open a file that was deleted.
No need to spam the logs with mere ENOENT reports.

Other errors are more significant.  Log those with errno, and
add event counters to record them.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-02-07 23:12:01 -05:00
Zygo Blaxell
6aad124241 crawl: somebody should set max_transid
The previous commit had both max_transid assignments commented out.
It happens to work because we set max_transid in the constructor and
it doesn't change after that, but it's cleaner to assign it explicitly.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-01-31 22:52:12 -05:00
Zygo Blaxell
087ec26c44 crawl: filter extents correctly
When an extent ref is modified, all of the refs in the same metadata
page get the same transid in the TREE_SEARCH_V2 header.  This causes
two problems:

	- Extents with generation < min_transid are included if they
	happen to be referenced by pages with generation >= min_transid.

	- Extent refs with generation > max_transid are excluded even
	if they reference extents with generation <= max_transid.

Both of these are wrong:  the first causes some extents to be repeatedly
scanned, the second causes some extents to not be scanned at all.

Change the TREE_SEARCH_V2 parameters so that Crawl sees all extents
newer than min_transid (i.e. set max_transid to max).  The TREE_SEARCH_V2
kernel logic already operates this way, i.e. it fetches every page with
transid >= min_transid and discards newer items if they are too new for
max_transid.  Filter strictly by the extent reference generation field
(i.e. the copy of the extent generation that is in the extent reference).
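
Sketched with the raw structures (the generation field of
btrfs_file_extent_item is the copy referred to above; names otherwise
illustrative):

	#include <linux/btrfs_tree.h>
	#include <cstdint>
	#include <endian.h>

	// TREE_SEARCH_V2 key setup:  min_transid = crawl's min_transid,
	// max_transid = UINT64_MAX (no longer the crawl's max_transid).
	// Then filter each EXTENT_DATA item by its own generation:
	bool ref_in_transid_range(const btrfs_file_extent_item *fei,
	                          uint64_t min_transid, uint64_t max_transid)
	{
		uint64_t gen = le64toh(fei->generation);	// the ref's copy of the extent generation
		return gen >= min_transid && gen <= max_transid;
	}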

Note this still scans extent data multiple times, but it should now
be exactly once per extent reference.  A proper fix for this requires
extent-based scanning instead of extent-ref-based scanning.

Formerly commit 5a8c655fc447c08772f01107a87e3364f093bb46 "roots: filter
out obsolete extents from extent refs" which landed in the subvol-threads
branch but not master.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-01-31 22:48:39 -05:00
Kai Krakow
408b6ae138 Code style: Fix wrong indentation
This had spaces instead of tabs by accident.

Signed-off-by: Kai Krakow <kai@kaishome.de>
2018-01-29 21:37:40 -05:00
Kai Krakow
5590fc0b13 Cmdline: Fix text alignment
Signed-off-by: Kai Krakow <kai@kaishome.de>
2018-01-29 21:37:40 -05:00
Kai Krakow
29d40ca359 Cmdline: Rename "relative-paths" to "strip-paths"
The previous name didn't match what this option really does.

Affects: #41

Signed-off-by: Kai Krakow <kai@kaishome.de>
2018-01-29 21:37:40 -05:00
Kai Krakow
b164717a25 Cmdline: Rename "notimestamps" to "no-timestamps"
That aligns better with the other options.

Signed-off-by: Kai Krakow <kai@kaishome.de>
2018-01-29 21:37:40 -05:00
Zygo Blaxell
af250f7732 roots: determine transid_max without open()ing every subvol root
Scan the roots tree directly for roots other than 5 (the FS root), and
use btrfs_get_root_transid on root_fd for root 5.  This avoids filling
up the root FD cache every time we want a new transid_max.  Now the only
reason we open a subvol root FD is to open a file within the subvol.

transid_max may be the same as the FS root's transid, in which case
the search loop is not necessary.  Place a counter (transid_max_miss)
to see if we ever need to look at root items. If this counter never goes
above zero, or does so very rarely, we can delete the search loop.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-01-29 21:37:39 -05:00
Zygo Blaxell
4f0bc78a4c crawl: don't block a Task waiting for new transids
Task should not block for extended periods of time.

Remove the RateEstimator::wait_for() in crawl_roots.  When crawl_roots
runs out of data, let the last crawl_task end without rescheduling.
Schedule crawl_task again on transid polls if it was not already running.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-01-29 21:37:39 -05:00
Zygo Blaxell
b67fba0acd log: BEESLOGNOTE doesn't do what we think it does
BEESLOGNOTE was intended to combine BEESLOG and BEESNOTE, i.e. write a
log message and set the task status message from a single expression.
With the log levels we would now need several more variants
(BEESLOGNOTEDEBUG, BEESLOGNOTEERR...) or a parameter (BEESNOTELOG(DEBUG,
...)).

Or we give up on the idea.  This combination was used only 3 times so far.
The log messages and the note message have different editorial styles.

Remove the three instances of BEESLOGNOTE, and make the BEESLOGNOTE
definition equivalent to BEESLOG at LOG_NOTICE level for consistency.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-01-29 21:37:38 -05:00
Zygo Blaxell
d367c6364c context: improve toxic match logs
Reword log message for discovery of new toxic extents vs. lookup of
previously known toxic extents.  Also add the block data (especially
filename) to the discovery message.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-01-29 00:48:06 -05:00
Zygo Blaxell
591a44e59a resolve: drop support for old-style compressed BeesAddr
No public version of bees ever created old-style compressed hash table
entries.  Remove the code that supports them.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
2018-01-29 00:48:06 -05:00