From: Benoît Canet
Subject: Re: [Qemu-devel] [RFC V8 01/24] qcow2: Add journal specification.
Date: Wed, 3 Jul 2013 14:30:59 +0200
User-agent: Mutt/1.5.21 (2010-09-15)
> Care to explain that in more detail? Why shouldn't it work on spinning
> disks?

Hashes are random, so they introduce random read accesses. With a QCOW2 cluster size of 4KB, the deduplication code will do one random read per 4KB block when writing duplicated data. A server-grade hard disk is rated for 250 IOPS. That translates to about 1MB/s of deduplicated data. Not very usable.

On the contrary, a Samsung 840 Pro SSD is rated for 80k IOPS of random read. That should translate to roughly 320MB/s of potentially deduplicated data.

Having the dedup metadata on SSD and the actual data on disk would solve the problem, but it would need a block backend.

Benoît
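The back-of-envelope arithmetic above can be sketched as follows. This is only an illustration of the reasoning in the mail, not code from the dedup series; the function name and the MB = 10^6 bytes convention are my own assumptions:

```python
# Sketch of the throughput estimate: if every 4 KB cluster written
# costs one random read of dedup metadata, random-read IOPS caps the
# deduplicated write throughput at iops * cluster_size.

CLUSTER_SIZE = 4 * 1024  # bytes; the QCOW2 cluster size discussed in the thread

def dedup_throughput_mb_s(random_read_iops, cluster_size=CLUSTER_SIZE):
    """Upper bound on dedup write throughput, in MB/s (MB = 10^6 bytes)."""
    return random_read_iops * cluster_size / 1e6

# Server-grade spinning disk, ~250 random-read IOPS:
print(dedup_throughput_mb_s(250))      # ~1 MB/s
# Samsung 840 Pro SSD, rated ~80k random-read IOPS:
print(dedup_throughput_mb_s(80_000))   # ~320 MB/s
```

The exact numbers depend on whether KB/MB are taken as powers of two or ten; either way the two-orders-of-magnitude gap between disk and SSD is what makes dedup impractical on spinning disks.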