Re: [Qemu-devel] [RFC Design Doc] Speed up live migration by skipping free pages


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [RFC Design Doc]Speed up live migration by skipping free pages
Date: Thu, 24 Mar 2016 12:29:07 +0200

On Thu, Mar 24, 2016 at 10:16:47AM +0000, Li, Liang Z wrote:
> > On Thu, Mar 24, 2016 at 01:19:40AM +0000, Li, Liang Z wrote:
> > > > > > > 2. Why not use virtio-balloon
> > > > > > > Actually, virtio-balloon can do a similar thing by
> > > > > > > inflating the balloon before live migration, but its
> > > > > > > performance is not good: for an 8GB idle guest that has just
> > > > > > > booted, it takes about 5.7 sec to inflate the balloon to 7GB,
> > > > > > > but only 25ms to get a valid free page bitmap from the guest.
> > > > > > > There are several reasons for the bad performance of
> > > > > > > virtio-balloon:
> > > > > > > a. allocating pages (5%, 304ms)
> > > > > >
> > > > > > Interesting. This is definitely worth improving in the guest kernel.
> > > > > > Also, will it be faster if we allocate and pass huge pages to the
> > > > > > guest instead? Might speed up madvise as well.
> > > > >
> > > > > Maybe.
> > > > >
> > > > > > > b. sending PFNs to host (71%, 4194ms)
> > > > > >
> > > > > > OK, so we probably should teach the balloon to pass huge lists as
> > > > > > bitmaps.
> > > > > > That will be beneficial for regular balloon operation as well.
> > > > > >
> > > > >
> > > > > Agreed. The current balloon sends just 256 PFNs at a time; that's
> > > > > too few and leads to too many virtio transmissions, which is the
> > > > > main reason for the bad performance.
> > > > > Changing VIRTIO_BALLOON_ARRAY_PFNS_MAX to a larger value can
> > > > > improve the performance significantly. Maybe we should increase it
> > > > > before doing the further optimization, what do you think?
> > > >
> > > > We could push it up a bit higher: 256 PFNs are 1 kbyte in size, so we
> > > > can make the array 3x bigger and still fit struct virtio_balloon in a
> > > > single page. But if we are going to add the bitmap variant anyway, we
> > > > probably shouldn't bother.
> > > >
> > > > > > > c. address translation and madvise() operation (24%, 1423ms)
> > > > > >
> > > > > > How is this split between translation and madvise?  I suspect
> > > > > > it's mostly madvise, since you need translation when using a
> > > > > > bitmap as well. Correct? Could you measure this please?  Also,
> > > > > > what if we use the new MADV_FREE instead?  By how much would
> > > > > > this help?
> > > > > >
> > > > > For the current balloon, address translation is needed.
> > > > > But for live migration, there is no need to do address translation.
> > > >
> > > > Well, you need the ram address in order to clear the dirty bit.
> > > > How would you get it without translation?
> > > >
> > >
> > > If you mean that kind of address translation, yes, it's needed.
> > > What I want to say is that filtering out the free pages can be done
> > > by a bitmap operation.
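The bitmap operation Liang refers to can be sketched as a word-wise AND-NOT: once the guest supplies a free-page bitmap, the migration dirty bitmap drops every free page in bulk, with no per-page call. Names here are illustrative, not QEMU's actual migration code.

```c
#include <stddef.h>

/* Clear from the migration dirty bitmap every page the guest reported
 * free: dirty &= ~free, one machine word (BITS_PER_LONG pages) at a
 * time.  Pages cleared here are simply never sent to the destination. */
static void filter_free_pages(unsigned long *dirty,
                              const unsigned long *free_map,
                              size_t nwords)
{
    for (size_t i = 0; i < nwords; i++)
        dirty[i] &= ~free_map[i];
}
```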
> > >
> > > Liang
> > 
> > OK, so I see that your patches use block->offset in struct RAMBlock to
> > look up bits in the guest-supplied bitmap.
> > I don't think that's guaranteed to work.
> 
> It's part of the bitmap operation, because of the latest change to
> ram_list.dirty_memory.
> Why do you think so? Could you tell me the reason?
> 
> Liang

Sorry, why do I think what? That ram_addr_t is not guaranteed to equal the
GPA of the block?

E.g. HACKING says:
        Use hwaddr for guest physical addresses except pcibus_t
        for PCI addresses.  In addition, ram_addr_t is a QEMU internal address
        space that maps guest RAM physical addresses into an intermediate
        address space that can map to host virtual address spaces.


-- 
MST


