Re: [Qemu-devel] [PATCH] migration: optimize the downtime


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH] migration: optimize the downtime
Date: Tue, 25 Jul 2017 11:34:27 +0100
User-agent: Mutt/1.8.3 (2017-05-23)

* Jay Zhou (address@hidden) wrote:
> 
> On 2017/7/24 23:35, Dr. David Alan Gilbert wrote:
> > * Jay Zhou (address@hidden) wrote:
> > > Hi Dave,
> > > 
> > > On 2017/7/21 17:49, Dr. David Alan Gilbert wrote:
> > > > * Jay Zhou (address@hidden) wrote:
> > > > > qemu_savevm_state_cleanup() takes about 300ms in my RAM migration
> > > > > tests with an 8U24G VM (20G actually occupied). The main cost comes
> > > > > from the KVM_SET_USER_MEMORY_REGION ioctl when mem.memory_size = 0 in
> > > > > kvm_set_user_memory_region(). On the kernel module side, the main
> > > > > cost is kvm_zap_obsolete_pages(), which traverses the
> > > > > active_mmu_pages list to zap the unsync sptes.
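(For reference, passing memory_size = 0 is how userspace asks KVM to delete a
memslot, which is what triggers the zap. A minimal standalone sketch of that
ioctl - illustrative only, not QEMU's actual kvm_set_user_memory_region():)

    /* Minimal sketch (not QEMU's code): deleting a KVM memory slot.
     * Setting memory_size = 0 for an existing slot asks the kernel to
     * remove it; that removal is the path on which the expensive
     * kvm_zap_obsolete_pages() walk happens. */
    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    static int delete_memslot(int vm_fd, unsigned int slot)
    {
        struct kvm_userspace_memory_region mem = {
            .slot        = slot,
            .memory_size = 0,   /* size 0 means "delete this slot" */
        };
        return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &mem);
    }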
> > > > 
> > > > Hi Jay,
> > > >     Is this actually increasing the real downtime when the guest isn't
> > > > running, or is it just the reported time? I see that the s->downtime
> > > > value is calculated right after where we currently call
> > > > qemu_savevm_state_cleanup.
> > > 
> > > It actually increased the real downtime; I used the "ping" command to
> > > test it. The reason is that the source-side libvirt sends a QMP command
> > > to qemu to query the migration status, and handling it needs the BQL.
> > > Since qemu_savevm_state_cleanup runs with the BQL held, qemu cannot
> > > handle the QMP command until qemu_savevm_state_cleanup has finished. As
> > > a result, the source-side libvirt is delayed about 300ms before notifying
> > > the destination-side libvirt to send the "cont" command to start the VM.
> > > 
> > > I think the value of s->downtime is not accurate enough; maybe we could
> > > move the calculation of end_time to after qemu_savevm_state_cleanup has
> > > finished.
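(Roughly what that reordering would look like inside migration_thread - a
fragment sketched from context, not the actual patch; end_time, start_time
and s are the surrounding migration-state variables:)

    qemu_savevm_state_cleanup();          /* the ~300ms teardown */
    end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
    s->downtime = end_time - start_time;  /* now includes the cleanup */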
> > 
> > I'm copying in Paolo, Radim and Andrea - is there any way we can make the
> > teardown of KVM's dirty tracking not take so long? 300ms is a silly long
> > time on only a small VM.
> > 
> > > >     I guess the biggest problem is that 300ms happens before we restart
> > > > the guest on the source if a migration fails.
> > > 
> > > 300ms happens even if a migration succeeds.
> > 
> > Hmm, OK, this needs fixing then - it does explain a result I saw a while
> > ago where the downtime was much bigger with libvirt than it was with
> > qemu on its own.
> > 
> > > > > I think it can be optimized:
> > > > > (1) the source VM will be destroyed if the migration completes
> > > > >       successfully, so its resources will be cleaned up automatically
> > > > >       by the system
> > > > > (2) delay the cleanup if the migration failed
> > > > 
> > > > I don't like putting it in qmp_cont; that shouldn't have migration magic
> > > > in it.
> > > 
> > > Yes, it is not an ideal place. :(
> > > 
> > > > I guess we could put it in migrate_fd_cleanup perhaps? It gets called on
> > > > a bh near the end -  or could we just move it closer to the end of
> > > > migration_thread?
> > > 
> > > I have tested putting it in migrate_fd_cleanup, but the downtime did not
> > > improve. So I think moving it closer to the end of migration_thread
> > > would be no different as long as it still holds the BQL.
> > > Could we put it in migrate_init?
> > 
> > Your explanation above hints at why migrate_fd_cleanup doesn't help;
> > it's because we're still going to be doing it with the BQL taken.
> 
> Yes, it is.
> 
> > Can you tell me which version of libvirt you're using?
> 
> I'm using 1.3.4
> 
> > I thought the newer ones were supposed to use events so they didn't
> > have to poll qemu.
> 
> After checking the code of the newest libvirt, I think it is the same:
> the qemuMigrationWaitForCompletion function still polls qemu every 50ms.
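(Schematically, the polling pattern being described - query_migrate_status()
below is a stand-in for libvirt's QMP "query-migrate" round trip, not a real
libvirt function:)

    #include <unistd.h>

    /* Each status query needs the BQL on the qemu side, so this loop
     * stalls behind qemu_savevm_state_cleanup(). */
    enum mig_status { MIG_ACTIVE, MIG_COMPLETED, MIG_FAILED };
    extern enum mig_status query_migrate_status(void *vm);

    static void wait_for_completion(void *vm)
    {
        while (query_migrate_status(vm) == MIG_ACTIVE)
            usleep(50 * 1000);            /* 50ms poll interval */
    }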

Checking with Jiri Denemark (added to cc), newer libvirt should use
events when available - but that polling code is there to cope with
older qemus.  So with a newer qemu, I think it should spot the
COMPLETED event.

Dave

> Thanks,
> Jay
> 
> >   If we move qemu_savevm_state_cleanup, is it still safe? Are there
> > things we're supposed to do at that point which would go wrong if
> > we don't?
> > 
> > I wonder about something like: take a mutex in
> > memory_global_dirty_log_start, release it in
> > memory_global_dirty_log_stop.  Then make ram_save_cleanup start
> > a new thread that does the call to memory_global_dirty_log_stop.
> > 
> > Dave
> > 
> 
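(A rough, untested sketch of the mutex-plus-thread idea quoted above. A
QemuSemaphore initialised to 1 stands in for the mutex, since a mutex may not
be released by a thread other than the one that locked it:)

    #include "qemu/thread.h"

    static QemuSemaphore dirty_log_sem;   /* qemu_sem_init(&dirty_log_sem, 1) */

    void memory_global_dirty_log_start(void)
    {
        qemu_sem_wait(&dirty_log_sem);    /* wait for any pending teardown */
        /* ... existing start code ... */
    }

    static void *dirty_log_stop_thread(void *opaque)
    {
        memory_global_dirty_log_stop();   /* the slow KVM ioctls run here,
                                           * off the migration thread */
        qemu_sem_post(&dirty_log_sem);
        return NULL;
    }

    /* called from ram_save_cleanup() instead of stopping synchronously */
    static void dirty_log_stop_async(void)
    {
        QemuThread t;
        qemu_thread_create(&t, "dirty-log-stop", dirty_log_stop_thread,
                           NULL, QEMU_THREAD_DETACHED);
    }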
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


