Re: [Qemu-devel] [PATCH v4] block/vdi: Use bdrv_flush after metadata updates


From: phoeagon
Subject: Re: [Qemu-devel] [PATCH v4] block/vdi: Use bdrv_flush after metadata updates
Date: Sat, 09 May 2015 07:41:10 +0000

Full Linux Mint (17.1) Installation with writeback:

With VDI extra sync: 4min35s
Vanilla: 3min17s

which is consistent with the 'qemu-img convert' results (slightly less overhead, because some phases of the installation are actually CPU bound).
Still much faster than other "sync-after-metadata" formats like VPC (vanilla VPC: 7min43s).
The thing is, anyone who needs to set up a new Linux system every day probably has pre-installed images to start with, and everyone else just doesn't install an OS every day.



On Sat, May 9, 2015 at 2:39 PM Stefan Weil <address@hidden> wrote:
On 09.05.2015 at 05:59, phoeagon wrote:
BTW, how do you usually measure the time it takes to install a Linux distro? Most distro ISOs do NOT support unattended installation out of the box. (True, I could bake my own ISOs for this...) But do you have any ISOs made ready for this purpose?

On Sat, May 9, 2015 at 11:54 AM phoeagon <address@hidden> wrote:
Thanks. Dbench does not logically allocate new disk space all the time, because it's an FS-level benchmark that creates files and deletes them. Therefore the result also depends on the guest FS: say, a btrfs guest FS allocates about 1.8x the space that EXT4 does, due to its COW nature. It does cause the FS to allocate some space during roughly 1/3 of the test duration, I think. But this does not mitigate the effect much, because an FS often writes in strides rather than consecutively, which causes write amplification whenever blocks are allocated.
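For reference, a minimal guest-side dbench invocation might look like this (the mount point and client count here are illustrative, not the exact parameters used in the test above):

  # run 4 dbench clients for 60 seconds against the filesystem under test
  dbench -D /mnt/test -t 60 4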

So I tested it with qemu-img convert from a 400M raw file:
zheq-PC sdb # time ~/qemu-sync-test/bin/qemu-img convert -f raw -t unsafe -O vdi /run/shm/rand 1.vdi

real 0m0.402s
user 0m0.206s
sys 0m0.202s
zheq-PC sdb # time ~/qemu-sync-test/bin/qemu-img convert -f raw -t writeback -O vdi /run/shm/rand 1.vdi

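(For reproducibility: a 400M raw source file on tmpfs can be prepared along these lines; the dd invocation is an assumption, not taken from the transcript above.)

  # create a 400M random source file in RAM-backed storage
  dd if=/dev/urandom of=/run/shm/rand bs=1M count=400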

I assume that the target file 1.vdi is not on a physical disk.
Then flushing data will be fast. For real hard disks (not SSDs) the situation is
different: the r/w heads of the hard disk have to move between the data location
and the beginning of the written file, where the metadata is stored, so
I expect a larger effect there.
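A quick way to see that effect would be to run the same convert twice, once with the target on tmpfs and once on a rotating disk (the mount points here are illustrative):

  # target on tmpfs: flushes are nearly free
  time qemu-img convert -f raw -t writeback -O vdi /run/shm/rand /run/shm/out.vdi
  # target on a rotating disk: each metadata flush forces a head seek
  time qemu-img convert -f raw -t writeback -O vdi /run/shm/rand /mnt/disk/out.vdi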

For measuring the installation time of an OS, I'd take a reproducible installation
source (hard disk or DVD, no network connection) and take the time for
those parts of the installation where many packages are installed without
any user interaction. For Linux you won't need a stopwatch, because the
package directories in /usr/share/doc have nice timestamps.
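For example, the oldest and newest directory timestamps under /usr/share/doc bracket the package-installation window (a sketch, assuming a Debian-style layout):

  # oldest and newest package doc directories mark the start and end of the install
  ls -ltr /usr/share/doc | head -n 3
  ls -ltr /usr/share/doc | tail -n 3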


Stefan

