Re: [Qemu-devel] [PATCH 1/3] FVD: Added support for 'qemu-img update'


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH 1/3] FVD: Added support for 'qemu-img update'
Date: Sat, 29 Jan 2011 10:05:52 +0000

On Fri, Jan 28, 2011 at 9:26 PM, Chunqiang Tang <address@hidden> wrote:
>> It should be possible to change prefetching and copy-on-read while the
>> VM is running.  For example, having to shut down a VM in order to
>> pause the prefetching is not workable.  In the QED image streaming
>> tree there are monitor commands for this:
>>
>> http://repo.or.cz/w/qemu/stefanha.git/shortlog/refs/heads/stream-command
>
> I took a quick look at the code. Using a monitor command to dynamically
> control copy-on-read and prefetching is a good idea. This should be
> adopted in FVD as well.

After thinking about it more, qemu-img update does serve a purpose
too.  Sometimes it is necessary to set options on many images in bulk,
or from provisioning scripts, rather than at runtime.

I guess my main fear about qemu-img update is that it adds a new
interface that so far only FVD exploits.  If it never catches on with
other formats then we have a special feature that must be maintained
but is rarely used.  I'd hold off on this patch until code that can
make use of it has been merged into qemu.git.

> On another note, I saw that the code does not support (copy_on_read=off &&
> stream=on). My previous benchmarking shows that copy_on_read does slow
> down other normal reads and writes, because it needs to save data to disk.
> For example, numbers in my papers show that, on ext3, FVD with
> copy_on_read=on actually boots a VM slower than QCOW2 does, even
> though FVD's copy-on-read is already heavily optimized and is not on
> the critical path of reads (i.e., the callback is invoked and the data
> is returned to the VM first, and the copy-on-read data is then saved
> asynchronously in the background). Therefore, a user might want to
> leave copy-on-read disabled and do only prefetching when resources
> are idle.

The current implementation basically relies on copy-on-read to
populate the image.
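
To make the ordering concrete, here is a minimal, self-contained C
sketch of that read path.  It is not FVD or QED code; every name in it
(read_cluster, flush_cor_queue, and so on) is made up for illustration.
The point is simply that the guest callback fires as soon as the data
is available, and the copy-on-read write is only queued for later:

#include <stdio.h>
#include <string.h>

#define CLUSTER_SIZE 4096
#define NUM_CLUSTERS 16

struct image {
    unsigned char data[NUM_CLUSTERS][CLUSTER_SIZE];    /* image file */
    int allocated[NUM_CLUSTERS];         /* clusters already copied */
    unsigned char backing[NUM_CLUSTERS][CLUSTER_SIZE]; /* base image */
};

/* Copy-on-read writes queued by the read path, flushed later. */
struct cor_write {
    int cluster;
    unsigned char buf[CLUSTER_SIZE];
};
static struct cor_write queue[NUM_CLUSTERS];
static int queue_len;

typedef void (*read_cb)(unsigned char *buf, int len);

/* Read one cluster; the guest callback runs before any image write. */
static void read_cluster(struct image *img, int cluster,
                         unsigned char *buf, read_cb cb)
{
    if (img->allocated[cluster]) {
        memcpy(buf, img->data[cluster], CLUSTER_SIZE);
        cb(buf, CLUSTER_SIZE);
        return;
    }
    /* Cluster lives in the backing file: fetch it... */
    memcpy(buf, img->backing[cluster], CLUSTER_SIZE);
    /* ...complete the guest request immediately... */
    cb(buf, CLUSTER_SIZE);
    /* ...and queue the copy-on-read write for the background. */
    queue[queue_len].cluster = cluster;
    memcpy(queue[queue_len].buf, buf, CLUSTER_SIZE);
    queue_len++;
}

/* Drained later, e.g. when the disk is otherwise idle. */
static void flush_cor_queue(struct image *img)
{
    int i;
    for (i = 0; i < queue_len; i++) {
        memcpy(img->data[queue[i].cluster], queue[i].buf, CLUSTER_SIZE);
        img->allocated[queue[i].cluster] = 1;
    }
    queue_len = 0;
}

static void guest_cb(unsigned char *buf, int len)
{
    printf("guest request completed: %d bytes, first byte 0x%02x\n",
           len, buf[0]);
}

int main(void)
{
    static struct image img;
    unsigned char buf[CLUSTER_SIZE];

    memset(img.backing[3], 0xab, CLUSTER_SIZE);
    read_cluster(&img, 3, buf, guest_cb); /* guest sees data first */
    flush_cor_queue(&img);                /* copy persisted afterwards */
    printf("cluster 3 allocated: %d\n", img.allocated[3]);
    return 0;
}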

There's a lot of room for studying the behavior and making
improvements.  Coming up with throttling strategies that make the
prefetch I/O an "idle task" only when there's bandwidth available is
difficult because the problem is more complex than just one greedy
QEMU process.  In a cloud environment there will be many physical
hosts, each with multiple VMs, on a shared network, and no single QEMU
process has global knowledge.  It's more like TCP, where you need to
probe how much data the connection can carry, back off on packet
loss, and then gradually ramp up again.  But I'm not sure we have a
feedback mechanism to say "you're doing too much prefetching".
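
As a strawman, an additive-increase/multiplicative-decrease (AIMD)
controller in the spirit of TCP could look like the toy C sketch
below.  This is purely illustrative: the struct, the numbers, and the
use of request latency as the congestion signal are my assumptions,
and that latency signal is exactly the missing feedback mechanism I
just mentioned.

#include <stdio.h>

struct prefetch_throttle {
    double rate_mbps;        /* current prefetch bandwidth budget */
    double max_mbps;         /* never exceed this */
    double latency_limit_ms; /* latencies above this = congestion */
};

/* Called once per sampling interval with a measured request latency. */
static void throttle_update(struct prefetch_throttle *t,
                            double observed_latency_ms)
{
    if (observed_latency_ms > t->latency_limit_ms) {
        t->rate_mbps /= 2;               /* multiplicative decrease */
        if (t->rate_mbps < 1) {
            t->rate_mbps = 1;
        }
    } else {
        t->rate_mbps += 1;               /* additive increase */
        if (t->rate_mbps > t->max_mbps) {
            t->rate_mbps = t->max_mbps;
        }
    }
}

int main(void)
{
    struct prefetch_throttle t = { 8, 100, 50 };
    double samples[] = { 10, 12, 80, 9, 8, 70, 11 };
    int i;

    for (i = 0; i < 7; i++) {
        throttle_update(&t, samples[i]);
        printf("latency %5.1f ms -> rate %5.1f MB/s\n",
               samples[i], t.rate_mbps);
    }
    return 0;
}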

Stefan


