From: Li, Liang Z
Subject: Re: [Qemu-devel] [RFC Design Doc] Speed up live migration by skipping free pages
Date: Fri, 25 Mar 2016 01:59:21 +0000

> > > > > > > > > The order I'm trying to understand is something like:
> > > > > > > > >
> > > > > > > > >     a) Send the get_free_page_bitmap request
> > > > > > > > >     b) Start sending pages
> > > > > > > > >     c) Reach the end of memory
> > > > > > > > >       [ is_ready is false - guest hasn't made free map yet ]
> > > > > > > > >     d) normal migration_bitmap_sync() at end of first pass
> > > > > > > > >     e) Carry on sending dirty pages
> > > > > > > > >     f) is_ready is true
> > > > > > > > >       f.1) filter out free pages?
> > > > > > > > >       f.2) migration_bitmap_sync()
> > > > > > > > >
> > > > > > > > > It's f.1 I'm worried about.  If the guest started
> > > > > > > > > generating the free bitmap before (d), then a page
> > > > > > > > > marked as 'free' in f.1 might have become dirty before
> > > > > > > > > (d), and so (f.2) doesn't set the dirty bit again, and
> > > > > > > > > so we can't filter out pages in f.1.
> > > > > > > > >
> > > > > > > >
> > > > > > > > As you described, the order is incorrect.
> > > > > > > >
> > > > > > > > Liang
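
To make the ordering hazard above concrete, here is a minimal self-contained
C model (the arrays and the bitmap_sync() helper are toy stand-ins for
illustration, not QEMU code) in which filtering with a stale free map in
f.1 silently drops a page that was dirtied before (d):

    #include <stdbool.h>
    #include <stdio.h>

    #define NPAGES 4

    static bool migration_bmap[NPAGES];  /* pages still to send */
    static bool dirty_log[NPAGES];       /* dirty log since last sync */
    static bool free_map[NPAGES];        /* guest's (stale) free map */

    /* models migration_bitmap_sync(): fold the dirty log in, clear it */
    static void bitmap_sync(void)
    {
        for (int i = 0; i < NPAGES; i++) {
            if (dirty_log[i]) {
                migration_bmap[i] = true;
                dirty_log[i] = false;
            }
        }
    }

    int main(void)
    {
        /* guest reports page 2 free, then dirties it before (d) */
        free_map[2] = true;
        dirty_log[2] = true;

        bitmap_sync();                    /* (d): dirty bit consumed */

        for (int i = 0; i < NPAGES; i++)  /* f.1 with stale free map */
            if (free_map[i])
                migration_bmap[i] = false;

        bitmap_sync();                    /* f.2: log empty, no re-set */

        printf("page 2 will be sent: %s\n",
               migration_bmap[2] ? "yes" : "no");
        return 0;                         /* prints "no": page is lost */
    }
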
> > > > > > >
> > > > > > >
> > > > > > > So to make it safe, what is required is to make sure no free
> > > > > > > list is outstanding before calling migration_bitmap_sync.
> > > > > > >
> > > > > > > If one is outstanding, filter out pages before calling
> > > > > > > migration_bitmap_sync.
> > > > > > >
> > > > > > > Of course, if we just do it like we normally do with
> > > > > > > migration, then by the time we call migration_bitmap_sync
> > > > > > > the dirty bitmap is completely empty, so there won't be
> > > > > > > anything to filter out.
> > > > > > >
> > > > > > > One way to address this is to call migration_bitmap_sync in the
> > > > > > > IO handler, while VCPU is stopped, then make sure to filter
> > > > > > > out pages before the next migration_bitmap_sync.
> > > > > > >
> > > > > > > Another is to start filtering out pages in the IO handler, but
> > > > > > > make sure to flush the queue before calling
> > > > > > > migration_bitmap_sync.
> > > > > > >
> > > > > >
> > > > > > It's getting really complex; maybe we should start with something
> > > > > > simple: just skip the free pages in the ram bulk stage and make it
> > > > > > asynchronous?
> > > > > >
> > > > > > Liang
> > > > >
> > > > > You mean like your patches do? No, blocking the bulk migration
> > > > > until the guest responds is basically a non-starter.
> > > > >
> > > >
> > > > No, it doesn't wait any more. Like below (copied from the
> > > > previous thread):
> > > > --------------------------------------------------------------
> > > > 1. Set all the bits in the migration_bitmap_rcu->bmap to 1
> > > > 2. Clear all the bits in
> > > >    ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]
> > > > 3. Send the get_free_page_bitmap request
> > > > 4. Start to send pages to the destination and check whether the
> > > >    free_page_bitmap is ready:
> > > >    if (is_ready) {
> > > >      filter out the free pages from migration_bitmap_rcu->bmap;
> > > >      migration_bitmap_sync();
> > > >    }
> > > > continue until live migration completes.
> > > > ---------------------------------------------------------------
> > > > Can this work?
> > > >
> > > > Liang
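
For illustration, that proposed sequence could be modelled with a
self-contained C sketch like the one below; the arrays, the toy
migration_bitmap_sync(), and the simulated guest reply are stand-ins for
the real QEMU structures, not actual code from the patches:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define NPAGES 8

    static bool migration_bmap[NPAGES];
    static bool dirty_log[NPAGES];
    static bool free_map[NPAGES];
    static bool free_map_ready;          /* set by the async guest reply */

    static void migration_bitmap_sync(void)  /* toy stand-in */
    {
        for (int i = 0; i < NPAGES; i++) {
            if (dirty_log[i]) {
                migration_bmap[i] = true;
                dirty_log[i] = false;
            }
        }
    }

    int main(void)
    {
        memset(migration_bmap, 1, sizeof(migration_bmap)); /* step 1 */
        memset(dirty_log, 0, sizeof(dirty_log));           /* step 2 */
        /* step 3: send the get_free_page_bitmap request; do NOT wait */

        for (int pass = 0; pass < 3; pass++) {  /* "until complete" */
            /* ... send the pages whose migration_bmap bit is set ... */

            if (pass == 1) {                /* pretend the reply lands */
                free_map[3] = free_map[5] = true;
                free_map_ready = true;
            }
            if (free_map_ready) {           /* step 4 */
                for (int i = 0; i < NPAGES; i++)
                    if (free_map[i])
                        migration_bmap[i] = false;
                migration_bitmap_sync();
                free_map_ready = false;
            }
        }
        printf("page 3 still queued: %s\n",
               migration_bmap[3] ? "yes" : "no");
        return 0;
    }
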
> > >
> > > Not if you get the ready bit asynchronously like you wrote here,
> > > since is_ready can get set while migration_bitmap_sync is running.
> > >
> > > As I said previously, to make this work you need to do the
> > > filtering synchronously, while the VCPU is stopped and while the
> > > free pages from the list are not yet being reused.
> > >
> > > Alternatively, prevent getting the free page list from the guest
> > > and filtering it out from racing with migration_bitmap_sync.
> > >
> > > For example, flush the VQ after migration_bitmap_sync.
> > > So:
> > >
> > >     lock
> > >     migration_bitmap_sync();
> > >     /* flush the VQ: complete pending elements without applying them */
> > >     while (elem = virtqueue_pop) {
> > >         virtqueue_push(elem)
> > >         g_free(elem)
> > >     }
> > >     unlock
> > >
> > >
> > > while in handle_output
> > >
> > >     lock
> > >     /* normal path: apply each free list, then complete the element */
> > >     while (elem = virtqueue_pop) {
> > >         list = get_free_list(elem)
> > >         filter_out_free(list)
> > >         virtqueue_push(elem)
> > >         free(elem)
> > >     }
> > >     unlock
> > >
> > >
> > > The lock prevents migration_bitmap_sync from racing against
> > > handle_output.
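
A self-contained C sketch of that locking scheme might look like the
following; the mutex, filter_out_free(), and the boolean standing in for
virtqueue_pop() are illustrative stand-ins, not the real QEMU virtqueue
API:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t vq_lock = PTHREAD_MUTEX_INITIALIZER;
    static bool elem_pending;        /* stands in for virtqueue_pop() */

    static void filter_out_free(void)
    {
        /* clear the reported free pages in the migration bitmap */
    }

    static void sync_path(void)      /* migration thread */
    {
        pthread_mutex_lock(&vq_lock);
        /* migration_bitmap_sync(); */
        while (elem_pending) {       /* flush: complete the element   */
            elem_pending = false;    /* WITHOUT applying its free list */
        }
        pthread_mutex_unlock(&vq_lock);
    }

    static void *handle_output(void *arg)  /* VQ handler */
    {
        (void)arg;
        pthread_mutex_lock(&vq_lock);
        while (elem_pending) {
            filter_out_free();       /* apply, then complete */
            elem_pending = false;
        }
        pthread_mutex_unlock(&vq_lock);
        return NULL;
    }

    int main(void)
    {
        elem_pending = true;
        pthread_t t;
        pthread_create(&t, NULL, handle_output, NULL);
        sync_path();                 /* either order is now safe */
        pthread_join(t, NULL);
        printf("ok\n");
        return 0;
    }

Whichever thread takes the lock first wins; a free list can never be
applied concurrently with, or after, a sync it did not precede.
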
> >
> > I think the easier way is just to ignore the guest's free list response
> > if it comes back after the first pass.
> >
> > Dave
> 
> That's a subset of course - after the first pass == after
> migration_bitmap_sync.
> 
> But it's really nasty - for example, how do you know it's the response from
> this migration round and not a previous one?

It's easy: adding a request/response ID can solve this issue.
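As a sketch of what that could look like (the struct and helpers below are
hypothetical, not an existing QEMU interface): the host tags each request
with a fresh ID, the guest echoes it back, and any response whose ID does
not match the outstanding request is dropped.

    #include <stdint.h>
    #include <stdio.h>

    struct free_page_resp {
        uint32_t id;                     /* echoed request ID */
        /* free page bitmap follows */
    };

    static uint32_t outstanding_id;

    static uint32_t send_request(void)
    {
        return ++outstanding_id;         /* each round gets a fresh ID */
    }

    static int response_is_current(const struct free_page_resp *r)
    {
        return r->id == outstanding_id;  /* stale responses are ignored */
    }

    int main(void)
    {
        struct free_page_resp old = { .id = send_request() };
        struct free_page_resp now = { .id = send_request() };
        printf("old accepted: %d, new accepted: %d\n",
               response_is_current(&old), response_is_current(&now));
        return 0;
    }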

> It is really better to just keep things orthogonal and not introduce arbitrary
> limitations.
> 
> 
> For example, with post-copy there's no first pass, and it can still
> benefit from this optimization.
> 

Leave this to Dave ...

Liang

> 
> > >
> > >
> > > This way you can actually use ioeventfd for this VQ so the VCPU
> > > won't be blocked.
> > >
> > > I do not think this is so complex, and this way you can add
> > > requests for the guest's free bitmap at an arbitrary interval,
> > > either on the host side or in the guest.
> > >
> > > For example, add a value that says how often the guest should
> > > update the bitmap; set it to 0 to disable updates once the
> > > migration is done.
> > >
> > > Or, make the guest resubmit a new one when we consume the old one,
> > > and run handle_output through a periodic timer on the host.
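
A small sketch of that host-driven cadence (the knob and the callback
below are hypothetical, not an existing QEMU option):

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t update_interval_ms = 100;

    static void periodic_tick(void)  /* would run from a host timer */
    {
        if (update_interval_ms == 0) {
            return;                  /* updates disabled */
        }
        /* drain the VQ (handle_output), request a fresh bitmap from
         * the guest, and re-arm the timer for update_interval_ms */
    }

    int main(void)
    {
        periodic_tick();             /* migration running: keep asking */
        update_interval_ms = 0;      /* migration done: stop */
        periodic_tick();
        printf("ok\n");
        return 0;
    }
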
> > >
> > >
> > > > > --
> > > > > MST
> > --
> > Dr. David Alan Gilbert / address@hidden / Manchester, UK


