From e87f6e9649e2bdf056ae41bdfd6408449da42339 Mon Sep 17 00:00:00 2001
From: Zygo Blaxell
Date: Mon, 21 Jul 2025 21:01:00 -0400
Subject: [PATCH] readahead: ignore large and unproductive readahead requests

Sometimes there are absurdly large readahead requests (e.g. 32G), which
tie up a thread holding the readahead lock for a long time (not to
mention the IO the reading hammers the rest of the system with). These
are likely an artifact of the legacy ExtentWalker code interacting with
concurrent filesystem changes.

The maximum btrfs extent size is 128M, so cap the length of readahead
requests at that size.

Signed-off-by: Zygo Blaxell
---
 src/bees.cc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/bees.cc b/src/bees.cc
index 1130bed..a4b0b0c 100644
--- a/src/bees.cc
+++ b/src/bees.cc
@@ -253,7 +253,7 @@ bees_readahead_nolock(int const fd, const off_t offset, const size_t size)
 	// The btrfs kernel code does readahead with lower ioprio
 	// and might discard the readahead request entirely.
 	BEESNOTE("emulating readahead " << name_fd(fd) << " offset " << to_hex(offset) << " len " << pretty(size));
-	auto working_size = size;
+	auto working_size = min(size, uint64_t(128 * 1024 * 1024));
 	auto working_offset = offset;
 	while (working_size) {
 		// don't care about multithreaded writes to this buffer--it is garbage anyway