Re: [Qemu-devel] [PATCH v7 00/12] rdma: migration support


From: Michael R. Hines
Subject: Re: [Qemu-devel] [PATCH v7 00/12] rdma: migration support
Date: Thu, 13 Jun 2013 10:45:54 -0400
User-agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20130329 Thunderbird/17.0.5

On 06/13/2013 09:50 AM, Chegu Vinod wrote:

Attempted to migrate a smaller guest, 10 vCPUs / 64 GB (the guest was just idle), with the pin-all option.

It took ~20 sec to pin the guest's RAM (this is the time when the guest is "frozen"), and then the actual migration started and took about ~26 sec to complete, i.e. "info migrate" reported the total migration time as ~26 sec.

From a user's point of view, the total wall-clock time from when the migration was actually initiated to when the guest resumed on the target host was ~20 + ~26 = ~46 sec... hence my question.


(CC'ing qemu-devel, now.)

Ah, ok, yes, I see now - that's a bug that I would recommend reporting to the QEMU maintainer, actually:

Here is the sequence of events inside of QEMU:

1. issue the migrate command on the QEMU monitor
2. qmp_migrate() gets called
3. (tcp|rdma|unix|etc)_start_outgoing_migration() gets called  <= pinning occurs here
4. start the migration_thread() pthread  <= take first timestamp
5. migration completes  <= take another timestamp and subtract for total time
6. exit migration_thread()
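
To make the ordering concrete, here is a heavily simplified sketch of where the first timestamp currently lands relative to the pinning work. This is not the real migration.c; the signatures, the migrate_init()/rdma_start_outgoing_migration() call shapes and the clock helper are abbreviated, so treat the details as assumptions rather than the actual code:

    /* Simplified sketch -- not the actual QEMU source. Only the relative
     * order of "pin the RAM" vs. "take first timestamp" matters here. */

    static void *migration_thread(void *opaque)
    {
        MigrationState *s = opaque;

        /* Step 4: the clock only starts once this thread is running,
         * i.e. after pin-all has already spent its ~20 seconds. */
        int64_t start_time = qemu_get_clock_ms(rt_clock);

        /* ... iterate over RAM, send dirty pages, converge ... */

        /* Step 5: this difference is what "info migrate" reports. */
        s->total_time = qemu_get_clock_ms(rt_clock) - start_time;
        return NULL;
    }

    void qmp_migrate(const char *uri, Error **errp)
    {
        MigrationState *s = migrate_init();

        /* Step 3: transport setup; with x-rdma-pin-all this is where all of
         * guest RAM is registered/pinned, before any timestamp is taken. */
        rdma_start_outgoing_migration(s, uri, errp);

        /* Step 4 (the thread above) only begins after this setup returns. */
    }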

The problem, as you can see, is that "take first timestamp" needs to happen earlier, in step #2.

This is definitely a "nuisance", but it is not specific to RDMA, and I think a patch which moves the timestamp up to a higher level should be submitted, probably by one of the maintainers.
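
In other words, the fix would be something like the following (again just a sketch, not a tested patch; the exact field and helper names in migration.c may differ):

    void qmp_migrate(const char *uri, Error **errp)
    {
        MigrationState *s = migrate_init();

        /* Proposed: start the clock here, in step #2, so that transport
         * setup (including the ~20 s of pin-all registration) is counted
         * in the total time that "info migrate" reports. */
        s->total_time = qemu_get_clock_ms(rt_clock);

        rdma_start_outgoing_migration(s, uri, errp);   /* step 3: pinning */

        /* migration_thread() would then subtract s->total_time at the end
         * instead of taking its own start timestamp. */
    }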

Does that make sense?

- Michael




