qemu-devel


From: Lei Li
Subject: Re: [Qemu-devel] [PATCH 0/17 v2] Localhost migration with side channel for ram
Date: Fri, 25 Oct 2013 20:24:07 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Thunderbird/17.0

On 10/25/2013 03:30 PM, Paolo Bonzini wrote:
> On 25/10/2013 06:58, Lei Li wrote:
>> Right now I just have inaccurate numbers without the new vmsplice,
>> based on the result from 'info migrate': as the guest ram size
>> increases, the 'total time' is several times less than with the
>> current live migration, but the 'downtime' performs badly.
> Of course.
>> For a 1GB ram guest,
>>
>> total time: 702 milliseconds
>> downtime: 692 milliseconds
>>
>> And when the ram size of the guest increases exponentially, those
>> numbers are proportional to it.
>> I will make a list of the performance with the new vmsplice later; I am
>> sure it'd be much better than this at least.
> Yes, please.  Is the memory usage still 2x without vmsplice?
>
> I think you have a nice proof of concept, but on the other hand this
> probably needs to be coupled with some kind of postcopy live migration,
> that is:
>
> * the source starts sending data
>
> * but the destination starts running immediately
>
> * if the machine needs a page that is missing, the destination asks the
> source to send it
>
> * as soon as it arrives, the destination can restart
>
> Using postcopy is problematic for reliability: if the destination fails,
> the virtual machine is lost because the source doesn't have the latest
> content of memory.  However, this is a much, much smaller problem for
> live QEMU upgrade where the network cannot fail.
>
> If you do this, you can achieve pretty much instantaneous live upgrade,
> well within your original 200 ms goals.  But the flipping code with
> vmsplice should be needed anyway to avoid doubling memory usage, and

Yes, I have read the postcopy migration patches; postcopy does perform
very well on downtime, as it just sends the vmstate and then switches
execution to the destination host. And as you pointed out, it cannot
avoid doubling memory usage.
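As an aside, the postcopy scheme described above (destination runs
immediately and pulls missing pages from the source on demand, while the
rest is streamed in the background) can be sketched as a toy simulation.
This is an illustration only, not QEMU code; all class and method names
here are made up for the example.

```python
# Toy simulation of postcopy page migration (NOT QEMU code):
# the destination starts with no pages and faults them in from
# the source on first access, while the source streams the rest
# in the background.

class Source:
    def __init__(self, ram):
        self.ram = dict(ram)          # page number -> contents

    def send_page(self, pfn):
        # The source holds the authoritative copy until it is sent.
        return self.ram.pop(pfn)

class Destination:
    def __init__(self, source):
        self.source = source
        self.ram = {}
        self.faults = 0               # pages fetched on demand

    def read_page(self, pfn):
        if pfn not in self.ram:       # "page fault": ask the source
            self.faults += 1
            self.ram[pfn] = self.source.send_page(pfn)
        return self.ram[pfn]

    def background_copy(self):
        # The source pushes whatever has not been demand-fetched yet.
        for pfn in list(self.source.ram):
            self.ram[pfn] = self.source.send_page(pfn)

src = Source({pfn: f"data-{pfn}" for pfn in range(8)})
dst = Destination(src)

# The guest touches page 3 before the background copy reaches it.
assert dst.read_page(3) == "data-3"
dst.background_copy()                 # stream the remaining pages

assert dst.faults == 1                # only one demand fetch happened
assert len(dst.ram) == 8              # destination now has all pages
assert not src.ram                    # source no longer holds any page
```

The reliability caveat above corresponds to the fact that once
`send_page` has run, only the destination holds that page.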

The numbers listed above are based on the old vmsplice, as I have not
yet worked on the performance benchmark; the old vmsplice actually
copies data rather than moving it. As the feedback for this version is
positive, I am now trying to get a real result out with the new
vmsplice.
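As a rough analogy for the copy-versus-move distinction (in Python, not
the actual vmsplice(2) semantics): copying duplicates the bytes, so
memory usage doubles, while a page-flipping "move" hands over the same
underlying buffer without duplicating anything. The names below are
illustrative only.

```python
# Rough analogy for copy vs. move semantics (NOT real vmsplice):
# copying duplicates the data; "gifting" transfers a view of the
# same underlying buffer without touching the bytes.

guest_ram = bytearray(b"guest pages " * 4)

# Copying path: the data is duplicated, so memory usage doubles.
copied = bytes(guest_ram)             # full copy of the contents

# "Gift" path: the receiver takes a view of the same buffer;
# no byte is duplicated, analogous to flipping pages.
gifted = memoryview(guest_ram)

guest_ram[0:5] = b"GUEST"             # mutate the original buffer

assert copied[0:5] == b"guest"        # the copy kept the old bytes
assert bytes(gifted[0:5]) == b"GUEST" # the view sees the change
```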

BTW, the kernel side is looking into a huge page solution to improve
performance.

The recent patches on the kernel side are at this link:

http://article.gmane.org/gmane.linux.kernel/1574277

> it's looking pretty good in this version already!  I'm relieved that the
> RDMA code was designed right!

I am happy with it too. :)
Those RDMA hooks really make things more flexible!


> Paolo



--
Lei



