qemu-devel

From: Glauber Costa
Subject: Re: [Qemu-devel] [PATCH] ram_save_live: add a no-progress convergence rule
Date: Tue, 19 May 2009 11:09:27 -0400
User-agent: Mutt/1.4.2.2i

On Tue, May 19, 2009 at 05:59:14PM +0300, Dor Laor wrote:
> Glauber Costa wrote:
> >On Tue, May 19, 2009 at 08:00:48AM -0500, Anthony Liguori wrote:
> >  
> >>Uri Lublin wrote:
> >>    
> >>>Currently the live-part (section QEMU_VM_SECTION_PART) of
> >>>ram_save_live has only one convergence rule, which is
> >>>when the number of dirty pages is smaller than a threshold.
> >>>
> >>>When the guest uses more memory pages than the threshold (e.g.
> >>>playing a movie, copying files, sending/receiving many packets),
> >>>it may take a very long time before convergence according to
> >>>this rule.
> >>>
> >>>This patch (re)introduces a no-progress convergence rule, which limits
> >>>the number of times the migration process is not progressing
> >>>(and even regressing), with regards to the number of dirty
> >>>pages. No-progress means that the number of pages that got
> >>>dirty is larger than the number of pages that got transferred
> >>>to the destination during the last transfer.
> >>>This rule applies only after the first round (in which most
> >>>memory pages are being transferred).
> >>>
> >>>Also this patch enlarges the number-dirty-pages threshold (of
> >>>the first convergence rule) to 50 pages (was 10).
> >>>
> >>>Signed-off-by: Uri Lublin <address@hidden>
> >>> 
> >>>      
> >>The right place to do this is in a management tool.  An arbitrary 
> >>convergence rule of 50 can do more damage than good.
> >>
> >>For some set of users, it's better that live migration fail than it 
> >>cause an arbitrarily long pause in the guest which can result in dropped 
> >>TCP connections, soft lock ups, and other badness.
> >>
> >>A management tool can force convergence by issuing a "stop" command in 
> >>the monitor.  I suspect a management tool cares more about wall-clock 
> >>time than number of iterations too so a valid metric would be something 
> >>along the lines of if not converged after N seconds, issue stop monitor 
> >>command where N is calculated from available network bandwidth and guest 
> >>memory size.
> >>    
> >Another possibility is for the management tool to increase the bandwidth
> >for short periods if it perceives that no progress is being made.
> >
> >Anyhow, I completely agree that we should not introduce this in qemu.
> >
> >However, maybe we could augment our "info migrate" to provide more info
> >about the internal state of migration, so the mgmt tool can make a more
> >informed decision?
> >
> >  
> The problem is that if migration is not progressing because the guest is
> dirtying pages faster than the migration protocol can send them, then we
> just waste time and CPU. The minimum is to notify the monitor interface
> in order to let the mgmt daemon trap it. We can easily see this issue
> while running iperf in the guest, or in any other high-load/dirty-pages
> scenario.
I know that; I've seen it myself. What I believe, and insist on, is only that
qemu does not really have to possess the knowledge to deal with it. Providing
migration stats in "info migrate" seems to me a better thing to do than one
single one-size-fits-all notification. The mgmt tool can then take the
appropriate action depending on the scenario it has in mind.
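To make the split of responsibilities concrete, the mgmt-side policy could look
roughly like the sketch below. Everything in it is hypothetical (the function
names, the stat names, the slack factor); none of it is a real QEMU or monitor
interface, it is just the shape of the decision the tool would make from
migration stats:

```python
def should_force_stop(elapsed_s, deadline_s, pages_dirtied, pages_sent):
    # Policy lives in the mgmt tool, not in qemu: force convergence by
    # stopping the guest once a wall-clock deadline passes, or once the
    # guest dirtied more pages than were sent in the last iteration
    # (the "no progress" condition discussed in this thread).
    if elapsed_s >= deadline_s:
        return True  # out of time: issue the "stop" monitor command
    return pages_dirtied > pages_sent  # no progress this round

def deadline_from(bandwidth_bytes_s, guest_mem_bytes, slack=3.0):
    # N seconds "calculated from available network bandwidth and guest
    # memory size", as Anthony suggested; slack is an arbitrary fudge
    # factor chosen here for illustration only.
    return slack * guest_mem_bytes / bandwidth_bytes_s
```

The point is that both inputs (elapsed time, dirty/sent page counts) are things
"info migrate" could export, leaving the actual decision outside qemu.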
 
> We can also make it configurable using the monitor migrate command. For 
> example:
> migrate -d -no_progress -threshold=x tcp:....
It can be done, but it fits better as a different monitor command.

Anthony, do you have any strong opinions here, or is this scheme acceptable?



