Mirror of https://github.com/Zygo/bees.git, synced 2025-11-03 19:50:34 +01:00
README: reintroduce new btrfs-send-compatibility workaround

Now it appears in both the github.io and github.com feature lists.

Signed-off-by: Zygo Blaxell <bees@furryterror.org>
```diff
@@ -17,6 +17,7 @@ Strengths
 * Space-efficient hash table and matching algorithms - can use as little as 1 GB hash table per 10 TB unique data (0.1GB/TB)
 * Incremental realtime dedupe of new data using btrfs tree search
 * Works with btrfs compression - dedupe any combination of compressed and uncompressed files
+* **NEW** [Works around `btrfs send` problems with dedupe and incremental parent snapshots](docs/options.md)
 * Works around btrfs filesystem structure to free more disk space
 * Persistent hash table for rapid restart after shutdown
 * Whole-filesystem dedupe - including snapshots
```
```diff
@@ -17,13 +17,13 @@ Strengths
 * Space-efficient hash table and matching algorithms - can use as little as 1 GB hash table per 10 TB unique data (0.1GB/TB)
 * Incremental realtime dedupe of new data using btrfs tree search
 * Works with btrfs compression - dedupe any combination of compressed and uncompressed files
+* **NEW** [Works around `btrfs send` problems with dedupe and incremental parent snapshots](options.md)
 * Works around btrfs filesystem structure to free more disk space
 * Persistent hash table for rapid restart after shutdown
 * Whole-filesystem dedupe - including snapshots
 * Constant hash table size - no increased RAM usage if data set becomes larger
 * Works on live data - no scheduled downtime required
 * Automatic self-throttling based on system load
-* **NEW** [Now works with `btrfs send`](options.md)
 
 Weaknesses
 ----------
```
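The feature list's sizing claim (as little as 1 GB of hash table per 10 TB of unique data) is a simple ratio. As a hypothetical sketch of that arithmetic (the function name and default ratio below are illustrative, not part of bees itself):

```python
def hash_table_size_gb(unique_data_tb: float, ratio_gb_per_tb: float = 0.1) -> float:
    """Estimate hash table size from unique data size, using the
    README's 0.1 GB-per-TB lower bound."""
    return unique_data_tb * ratio_gb_per_tb


# 10 TB of unique data -> 1.0 GB hash table, per the stated ratio.
print(hash_table_size_gb(10))
```

A larger hash table than this minimum generally improves match rates; the ratio above is only the README's lower bound, not a recommendation.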