From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [RFC Design Doc]Speed up live migration by skipping free pages
Date: Thu, 24 Mar 2016 16:44:13 +0200

On Thu, Mar 24, 2016 at 02:33:15PM +0000, Li, Liang Z wrote:
> > > > > > > Agree. The current balloon just sends 256 PFNs at a time; that's
> > > > > > > too few and leads to too many virtio transmissions, which is
> > > > > > > the main reason for the bad performance.
> > > > > > > Changing VIRTIO_BALLOON_ARRAY_PFNS_MAX to a larger value
> > > > > > > can improve the performance significantly. Maybe we should
> > > > > > > increase it before doing the further optimization, what do
> > > > > > > you think?
> > > > > >
> > > > > > We could push it up a bit higher: 256 PFNs is 1 kbyte in size, so we
> > > > > > can make it 3x bigger and still fit struct virtio_balloon in a
> > > > > > single page. But if we are going to add the bitmap variant
> > > > > > anyway, we probably shouldn't bother.
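
For reference, the size arithmetic here: a minimal sketch, assuming the guest
driver stores PFNs as 32-bit values (as the Linux virtio_balloon driver does)
and a 4 KiB page; the struct name and the 3x constant below are illustrative,
not actual kernel identifiers:

    #include <assert.h>
    #include <stdint.h>

    #define PFNS_MAX   (3 * 256)   /* hypothetical 3x of the current 256 */
    #define PAGE_SIZE  4096

    struct balloon_sketch {
        /* ... other struct virtio_balloon fields elided ... */
        uint32_t pfns[PFNS_MAX];   /* 256 entries * 4 bytes = 1 KiB;
                                      768 entries = 3 KiB */
    };

    /* 3 KiB of PFNs still leaves room for the remaining fields
       within a single 4 KiB page. */
    static_assert(sizeof(struct balloon_sketch) <= PAGE_SIZE,
                  "must fit in one page");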
> > > > > >
> > > > > > > > > c. address translation and madvise() operation (24%,
> > > > > > > > > 1423ms)
> > > > > > > >
> > > > > > > > How is this split between translation and madvise?  I
> > > > > > > > suspect it's mostly madvise, since you need translation when
> > > > > > > > using a bitmap as well.
> > > > > > > > Correct? Could you measure this please?  Also, what if we
> > > > > > > > use the new MADV_FREE instead?  By how much would this help?
> > > > > > > >
> > > > > > > For the current balloon, address translation is needed.
> > > > > > > But for live migration, there is no need to do address 
> > > > > > > translation.
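
For reference on the MADV_FREE question above: a minimal sketch of the two
madvise() options, assuming hva/len describe the host mapping of a range of
guest free pages (the function name is illustrative). MADV_FREE (added in
Linux 4.5) only marks pages as reclaimable instead of dropping them
immediately, so it is typically cheaper than MADV_DONTNEED:

    #include <sys/mman.h>

    static int drop_free_range(void *hva, size_t len)
    {
    #ifdef MADV_FREE
        /* Lazy: pages are reclaimed only under memory pressure. */
        return madvise(hva, len, MADV_FREE);
    #else
        /* Eager: pages are dropped immediately. */
        return madvise(hva, len, MADV_DONTNEED);
    #endif
    }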
> > > > > >
> > > > > > Well, you need the ram address in order to clear the dirty bit.
> > > > > > How would you get it without translation?
> > > > > >
> > > > >
> > > > > If you mean that kind of address translation, yes, it's needed.
> > > > > What I want to say is that filtering out the free pages can be done
> > > > > by a bitmap operation.
> > > > >
> > > > > Liang
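
For reference, the bitmap operation Liang describes boils down to a single
bitwise pass: a minimal sketch, with illustrative names rather than the
patch's actual identifiers, assuming both bitmaps index the same pages:

    #include <stddef.h>

    #define BITS_PER_LONG (sizeof(unsigned long) * 8)

    static void filter_free_pages(unsigned long *migration_bitmap,
                                  const unsigned long *free_page_bitmap,
                                  size_t nr_pages)
    {
        size_t i, words = (nr_pages + BITS_PER_LONG - 1) / BITS_PER_LONG;

        /* A page the guest reports as free need not be sent:
           clear its dirty bit so migration skips it. */
        for (i = 0; i < words; i++) {
            migration_bitmap[i] &= ~free_page_bitmap[i];
        }
    }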
> > > >
> > > > OK, so I see that your patches use block->offset in struct RAMBlock
> > > > to look up bits in the guest-supplied bitmap.
> > > > I don't think that's guaranteed to work.
> > >
> > > It's part of the bitmap operation; it follows the latest change to
> > > ram_list.dirty_memory.
> > > Why do you think so? Could you tell me the reason?
> > >
> > > Liang
> > 
> > Sorry, why do I think what? That ram_addr_t is not guaranteed to equal the
> > GPA of the block?
> > 
> 
> I mean, why do you think it can't be guaranteed to work?
> Yes, ram_addr_t is not guaranteed to equal the GPA of the block, but I didn't
> use it as a GPA. The code in filter_out_guest_free_pages() in my patch just
> follows the style of the latest change to ram_list.dirty_memory[].
> 
> The free page bitmap obtained from the guest in my RFC patch has had the
> 'hole' filtered out, so bit N of the free page bitmap and bit N in
> ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]->blocks correspond to
> the same guest page.  Right?
> If that's true, I think I am doing the right thing?
> 
> 
> Liang

There's no guarantee that there's a single 'hole',
even on the PC, and we want the balloon to be portable.

So I'm not sure I understand what your patch is doing.
Do you mean you pass the GPA-to-ram-addr
mapping from host to guest?

That can be made to work, but it's not a good idea,
and I don't see why it would be faster than doing
the same translation host side.
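
For reference, the host-side translation would go through QEMU's memory API;
a rough sketch, with error handling and reference counting elided (this
mirrors the API of the QEMU 2.5/2.6 era and is illustrative, not code from
the patch):

    #include "exec/memory.h"

    static ram_addr_t gpa_to_ram_addr(hwaddr gpa)
    {
        MemoryRegionSection section;

        section = memory_region_find(get_system_memory(), gpa, 1);
        /* Callers must check that section.mr is non-NULL RAM and
           drop the reference memory_region_find() takes. */
        return memory_region_get_ram_addr(section.mr)
               + section.offset_within_region;
    }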


> > E.g. HACKING says:
> >     Use hwaddr for guest physical addresses except pcibus_t
> >     for PCI addresses.  In addition, ram_addr_t is a QEMU internal
> > address
> >     space that maps guest RAM physical addresses into an intermediate
> >     address space that can map to host virtual address spaces.
> > 
> > 
> > --
> > MST


