Re: [Qemu-devel] [PATCH 00/15] Make migration work with hotplug


From: Alex Williamson
Subject: Re: [Qemu-devel] [PATCH 00/15] Make migration work with hotplug
Date: Thu, 24 Jun 2010 09:23:03 -0600

On Thu, 2010-06-24 at 09:04 -0600, Alex Williamson wrote:
> On Thu, 2010-06-24 at 15:02 +0900, Yoshiaki Tamura wrote:
> > 
> > Hi Alex,
> > 
> > Is there additional overhead to saving RAM introduced by this series?
> > If so, how much?
> 
> Yes, there is overhead, but it's typically quite small.  If I migrate a
> 1G VM immediately after I boot to a login prompt (lots of zero pages), I
> get an overhead of 0.000076%.  That's only 226 extra bytes over the
> 297164995 bytes otherwise transferred.  If I build a kernel on the guest
> and migrate during the compilation, the overhead is 0.000019%.  The
> overhead is tiny largely due to patch 12/15, which avoids resending the
> block name when we're still working within the same block as the one
> sent previously.
> If I disable this optimization, the overhead goes up to 0.93% after boot
> and 0.26% during a kernel compile.
> 
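For illustration, here is a minimal standalone sketch of the header encoding the patch 12/15 optimization implies: when a page lands in the same RAMBlock as the previous one, a single "continue" flag bit replaces the block-name string. The flag names, struct fields, and byte layout below are assumptions for the sketch, not the exact on-the-wire format used by the series.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in definitions; the real code lives in QEMU's RAM save path.
 * Flag values and names are illustrative assumptions. */
#define RAM_SAVE_FLAG_PAGE      0x08
#define RAM_SAVE_FLAG_CONTINUE  0x20

typedef struct RAMBlock {
    const char *idstr;          /* e.g. "pc.ram" */
} RAMBlock;

static RAMBlock *last_sent_block;

/* Write one page header; returns the number of header bytes emitted.
 * (Endianness is ignored here; flags sit in the low bits of the
 * page-aligned offset, which is why OR-ing them in is safe.) */
static size_t put_page_header(FILE *f, RAMBlock *block, uint64_t offset)
{
    size_t len = 0;
    uint64_t header = offset | RAM_SAVE_FLAG_PAGE;

    if (block == last_sent_block) {
        /* Same block as the previous page: one flag bit replaces the
         * block-name string, so the header stays 8 bytes. */
        header |= RAM_SAVE_FLAG_CONTINUE;
        len += fwrite(&header, 1, sizeof(header), f);
    } else {
        /* Block changed: send the length-prefixed name once; later
         * pages in this block take the cheap path above. */
        uint8_t name_len = (uint8_t)strlen(block->idstr);
        len += fwrite(&header, 1, sizeof(header), f);
        len += fwrite(&name_len, 1, 1, f);
        len += fwrite(block->idstr, 1, name_len, f);
        last_sent_block = block;
    }
    return len;
}

int main(void)
{
    RAMBlock ram = { "pc.ram" };
    FILE *f = fopen("/dev/null", "wb");
    if (!f) {
        return 1;
    }

    size_t first = put_page_header(f, &ram, 0x0);     /* 8 + 1 + 6 bytes */
    size_t next  = put_page_header(f, &ram, 0x1000);  /* 8 bytes */
    printf("first page header: %zu bytes, later pages: %zu bytes\n",
           first, next);
    fclose(f);
    return 0;
}
```

Under these assumptions, a block switch costs the 8-byte header plus a length byte plus the name, while every following page in the same block costs only the 8-byte header. With a block name of around ten characters, resending it with every 4 KiB page would add roughly 11 bytes per ~4100-byte record, on the order of the 0.26% figure quoted above.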
> Note that an x86 VM does a separate qemu_ram_alloc for memory above 4G,
> which means in bigger VMs we may end up needing to resend the ramblock
> name once in a while as we bounce between above and below 4G.  Worst
> case for this could match the 0.26% above, but in my testing during a
> kernel compile, this seems to increase the overhead to 0.000026% on a 6G
> VM.  I don't see any reason why we couldn't allocate all the ram in a
> single qemu_ram_alloc call, so I'll add another patch to make that
> change (which will also shorten the name to "pc.ram" for even less
> overhead ;).  Thanks,

FWIW, with this change, my migration during a kernel compile on the 6G VM
seems to run at just 0.000019%-0.000020% overhead, so that eliminates the
penalty for larger-memory VMs.  Thanks,

Alex
