From: Peter Lieven
Subject: Re: [Qemu-devel] [RFC PATCH] qcow2: add a readahead cache for qcow2_decompress_cluster
Date: Sat, 28 Dec 2013 16:35:51 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0

On 27.12.2013 04:23, Fam Zheng wrote:
> On 27.12.2013 00:19, Peter Lieven wrote:
>> While evaluating compressed qcow2 images as a basis for
>> virtual machine templates I found out that there are a lot
>> of partly redundant reads (compressed clusters share common
>> physical sectors) and relatively short reads.
>>
>> This doesn't hurt if the image resides on a local
>> filesystem where we can benefit from the local page cache,
>> but it adds a lot of penalty when accessing remote images
>> on NFS or similar exports.
>>
>> This patch effectively implements a readahead of 2 * cluster_size,
>> which is 2 * 64 kB by default, resulting in 128 kB of readahead. This
>> matches the common readahead setting on Linux, for instance.
>>
>> For example, this leads to the following times when converting
>> a compressed qcow2 image to a local tmpfs partition.
>>
>> Old:
>> time ./qemu-img convert 
>> nfs://10.0.0.1/export/VC-Ubuntu-LTS-12.04.2-64bit.qcow2 /tmp/test.raw
>> real    0m24.681s
>> user    0m8.597s
>> sys    0m4.084s
>>
>> New:
>> time ./qemu-img convert 
>> nfs://10.0.0.1/export/VC-Ubuntu-LTS-12.04.2-64bit.qcow2 /tmp/test.raw
>> real    0m16.121s
>> user    0m7.932s
>> sys    0m2.244s
>>
>> Signed-off-by: Peter Lieven <address@hidden>
>> ---
>>   block/qcow2-cluster.c |   27 +++++++++++++++++++++++++--
>>   block/qcow2.h         |    1 +
>>   2 files changed, 26 insertions(+), 2 deletions(-)
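To make the quoted idea concrete: it boils down to keeping the last window of
compressed data read from the image file and serving nearby short reads from
that buffer instead of issuing another request. A minimal, self-contained
sketch of the pattern follows; the names ra_cache, backend_pread and
WINDOW_SIZE are illustrative, and pread() stands in for the real block-layer
read, so this is not the actual patch code:

/*
 * Sketch of the readahead-cache pattern (not the actual patch): keep the
 * last window read from the image file and serve short reads that fall
 * inside it from memory.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define WINDOW_SIZE (2 * 64 * 1024)   /* 2 * cluster_size with 64k clusters */

typedef struct {
    uint64_t offset;              /* file offset the cached window starts at */
    size_t   len;                 /* valid bytes in buf, 0 = empty cache */
    uint8_t  buf[WINDOW_SIZE];
} ra_cache;

static ssize_t backend_pread(int fd, void *buf, size_t count, uint64_t offset)
{
    return pread(fd, buf, count, offset);  /* one real request to the image */
}

/* Read [offset, offset + count); refill a whole window on a miss. */
static int cached_read(int fd, ra_cache *c, uint64_t offset,
                       void *buf, size_t count)
{
    if (count > WINDOW_SIZE) {
        /* oversized requests bypass the cache */
        return backend_pread(fd, buf, count, offset) == (ssize_t)count ? 0 : -1;
    }
    if (c->len == 0 || offset < c->offset ||
        offset + count > c->offset + c->len) {
        ssize_t n = backend_pread(fd, c->buf, WINDOW_SIZE, offset);
        if (n < (ssize_t)count) {
            return -1;
        }
        c->offset = offset;
        c->len = (size_t)n;
    }
    memcpy(buf, c->buf + (offset - c->offset), count);
    return 0;
}

int main(int argc, char **argv)
{
    static ra_cache cache;            /* zero-initialised: empty cache */
    uint8_t sector[512];
    int fd;

    if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0) {
        fprintf(stderr, "usage: %s <image-file>\n", argv[0]);
        return 1;
    }
    /* two short reads close together: the second is served from the window */
    if (cached_read(fd, &cache, 0, sector, sizeof(sector)) == 0 &&
        cached_read(fd, &cache, 4096, sector, sizeof(sector)) == 0) {
        printf("second read hit the readahead window\n");
    }
    close(fd);
    return 0;
}

The second convert run in the quoted numbers is faster because the many short,
partly overlapping reads of compressed clusters now hit such a window instead
of going back to the NFS server for every one of them.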
>
> I like this idea, but here's a question. Actually, this penalty is common to
> all protocol drivers: curl, gluster, whatever. Readahead is not only good for
> compression processing, but also quite helpful for boot: BIOS and GRUB may
> send sequential 1-sector I/O, synchronously, and thus suffer from the high
> latency of network communication. So I think if we want to do this, we will
> want to share it with other format and protocol combinations.
I had the same idea in mind. It's not only the high latency, but also the high
I/O load on the storage, as reading sectors one by one produces a lot of IOPS.
But we have to be very careful:
- It's likely that the OS already does readahead, so we should not add that
complexity to qemu in this case.
- We would definitely break zero-copy functionality.

My idea would be that we only do a readahead if we observe a read smaller than
n bytes and then round the request up to that size. Maybe we should only apply
this logic if there is a 1-sector read and then read e.g. 4K. In any case this
has to be an opt-in feature.
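
Roughly, such a heuristic could look like the helper below; the function name,
the 4K constant and the opt-in flag are made up for the sketch and are not an
existing QEMU interface:

#include <stdbool.h>
#include <stdint.h>

#define SECTOR_SIZE     512
#define MIN_READ_BYTES  4096    /* round tiny reads up to this size */

/*
 * Sketch of the proposed opt-in heuristic: leave everything alone unless
 * the guest issues a 1-sector read, in which case fetch 4K and keep the
 * surplus around for the sequential reads that typically follow.
 */
uint64_t readahead_request_bytes(uint64_t request_bytes, bool readahead_opt_in)
{
    if (!readahead_opt_in) {
        return request_bytes;          /* feature disabled: no change */
    }
    if (request_bytes <= SECTOR_SIZE) {
        return MIN_READ_BYTES;         /* e.g. BIOS/GRUB 1-sector reads */
    }
    return request_bytes;              /* larger reads stay untouched */
}

Keeping the rounded-up path that narrow also addresses the concerns above: the
extra copy only ever happens for requests that were tiny to begin with, and
with the flag off nothing changes at all.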

If I have some time I will collect a histogram of transfer sizes versus
timing while booting popular OSes.
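
For the size part, a simple bucket counter like the following would probably
do; the names and the power-of-two bucket layout are made up for the sketch:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define NBUCKETS 16   /* <=512 B, <=1 K, <=2 K, ...; last bucket catches the rest */

static uint64_t read_size_hist[NBUCKETS];

/* count one guest read of 'bytes' bytes into its power-of-two bucket */
static void record_read_size(uint64_t bytes)
{
    unsigned bucket = 0;
    uint64_t limit = 512;

    while (bucket < NBUCKETS - 1 && bytes > limit) {
        limit <<= 1;
        bucket++;
    }
    read_size_hist[bucket]++;
}

static void dump_hist(void)
{
    uint64_t limit = 512;
    unsigned i;

    for (i = 0; i < NBUCKETS; i++, limit <<= 1) {
        printf("<= %8" PRIu64 " B: %" PRIu64 " requests\n",
               limit, read_size_hist[i]);
    }
}

int main(void)
{
    /* fake a few requests: BIOS-style 1-sector reads plus one 64K read */
    record_read_size(512);
    record_read_size(512);
    record_read_size(64 * 1024);
    dump_hist();
    return 0;
}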

Peter



