mirror of https://github.com/Zygo/bees.git
hash: don't spin when writes fail
When a hash table write fails, we skip over the write throttling because we didn't report that we successfully wrote an extent. This can be bad if the filesystem is full and the allocations for writes are burning a lot of CPU time searching for free space.

We also don't retry the write later on, since we assume the extent is clean after a write attempt whether it was successful or not, so the extent might not be written out later when writes are possible again.

Check whether a hash extent is dirty, and always throttle after attempting the write. If a write fails, leave the extent dirty so we attempt to write it out the next time flush cycles through the hash table. During shutdown this will reattempt each failing write once; after that, the updated hash table data will be dropped.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
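The fix described above comes down to three rules: skip extents that are not dirty, leave an extent marked dirty when its write fails so a later flush pass retries it, and throttle after every write attempt whether or not it succeeded. Below is a minimal, self-contained C++ sketch of that logic, not the bees implementation; the type and helper names (SketchHashTable, write_extent, throttle) and the meaning of the bool return value are assumptions made for illustration.

#include <chrono>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Hypothetical stand-in for the hash table's per-extent dirty tracking.
struct SketchHashTable {
	std::vector<bool> m_dirty;   // one dirty flag per hash table extent

	explicit SketchHashTable(size_t extents) : m_dirty(extents, true) {}

	// Stand-in for the real extent write; returns false on failure (e.g. ENOSPC).
	bool write_extent(uint64_t /* extent_index */) {
		return false;            // simulate a full filesystem for this sketch
	}

	// Stand-in for write throttling: back off so failed writes don't spin the CPU.
	void throttle() {
		std::this_thread::sleep_for(std::chrono::milliseconds(10));
	}

	// Returns true if the extent was dirty and a write was attempted
	// (the return-value meaning is an assumption in this sketch).
	bool flush_dirty_extent(uint64_t extent_index) {
		if (!m_dirty.at(extent_index)) {
			return false;        // clean extent: nothing to write, no throttle
		}
		if (write_extent(extent_index)) {
			m_dirty[extent_index] = false;   // success: extent is clean now
		}
		// On failure the dirty flag stays set, so the next flush pass (or the
		// single retry during shutdown) will attempt this extent again.
		throttle();              // throttle after every attempt, success or not
		return true;
	}
};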
@@ -415,6 +415,7 @@ public:
 	bool push_random_hash_addr(HashType hash, AddrType addr);
 	void erase_hash_addr(HashType hash, AddrType addr);
 	bool push_front_hash_addr(HashType hash, AddrType addr);
+	bool flush_dirty_extent(uint64_t extent_index);
 
 private:
 	string m_filename;
@@ -474,7 +475,6 @@ private:
 	void fetch_missing_extent_by_index(uint64_t extent_index);
 	void set_extent_dirty_locked(uint64_t extent_index);
 	size_t flush_dirty_extents(bool slowly);
-	bool flush_dirty_extent(uint64_t extent_index);
 
 	size_t hash_to_extent_index(HashType ht);
 	unique_lock<mutex> lock_extent_by_hash(HashType ht);
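For context, a full flush pass over the table could use the sketch above roughly as follows, with flush_dirty_extent deciding per extent whether to skip, write, or retry later, and throttling after each attempted write. The loop structure and the handling of the slowly flag are assumptions for illustration, not bees' actual flush_dirty_extents.

#include <cstddef>
#include <cstdint>

// Continues the SketchHashTable sketch above.
// Returns the number of extents for which a write was attempted.
size_t flush_dirty_extents_sketch(SketchHashTable &table, uint64_t extent_count, bool slowly) {
	size_t attempted = 0;
	for (uint64_t i = 0; i < extent_count; ++i) {
		if (table.flush_dirty_extent(i)) {
			++attempted;
		}
		// A background ("slowly") pass might add extra delay between extents
		// here; the shutdown path would run without the extra delay.
		(void)slowly;
	}
	return attempted;
}

Because a failed write leaves its extent dirty, the next call to this pass retries it; during shutdown that amounts to one more attempt per failing extent before the updated hash table data is dropped.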