
Re: [Qemu-devel] [PATCH v7 00/12] rdma: migration support


From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH v7 00/12] rdma: migration support
Date: Thu, 13 Jun 2013 16:06:38 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130514 Thunderbird/17.0.6

On 13/06/2013 10:55, Michael R. Hines wrote:
> On 06/13/2013 10:26 AM, Chegu Vinod wrote:
>>>
>>> 1. start QEMU with the lock option *first*
>>> 2. Then enable x-rdma-pin-all
>>> 3. Then perform the migration
>>>
>>> What happens here? Does pinning "in advance" help you?
>>
>> Yes it does help by avoiding the freeze time at the start of the
>> pin-all migration.
>>
>> I already mentioned this in my earlier responses as an option
>> to consider for larger guests
>> (https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00435.html).
>>
>> But pinning all of guest memory has a few drawbacks...as you may
>> already know.
>>
>> Just to be sure, I double-checked it again with your v7 bits.
>> Started a 64GB/10VCPU guest (started qemu with the "-realtime
>> mlock=on" option) and, as expected, the guest startup took about 20
>> seconds longer (i.e. the time taken to mlock the 64GB of guest RAM),
>> but the pin-all migration started fine, i.e. I didn't observe any
>> freezes at the start of the migration.
>>
>>
> (CC-ing qemu-devel).
> 
> OK, that's good to know. This means that we need to bring up the
> mlock() problem as a "larger" issue in the Linux community instead of
> the QEMU community.
> 
> In the meantime, how about I update the RDMA patch to do the
> following:
> 
> 1. Solution #1:
>        If user requests "x-rdma-pin-all", then
>             If QEMU has enabled "-realtime mlock=on"
>                    Then, allow the capability
>             Else
>                   Disallow the capability
> 
> 2. Solution #2: Create a NEW qemu monitor command which locks memory
>    *in advance*, before the migrate command occurs, to clearly
>    indicate to the user that the cost of locking memory must be paid
>    before the migration starts.
> 
> Which solution do you prefer? Or do you have an alternative idea?

Let's just document it in the release notes.  There's time to fix it.
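
For reference, the gating Michael describes in Solution #1 boils down to a
one-line check; a minimal sketch, with hypothetical names (this is not
QEMU's actual capability code):

```c
#include <stdbool.h>

/* Hypothetical check along the lines of Solution #1: only allow the
 * x-rdma-pin-all capability when QEMU was started with -realtime
 * mlock=on, so the cost of locking memory was already paid at startup. */
static bool pin_all_allowed(bool pin_all_requested, bool mlock_enabled)
{
    if (!pin_all_requested) {
        return true;            /* nothing to gate */
    }
    return mlock_enabled;       /* disallow unless memory is pre-locked */
}
```

A user enabling x-rdma-pin-all without mlock=on would simply get the
capability refused instead of a multi-second freeze at migration start.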

Regarding the timestamp problem, it should be fixed in the RDMA code.
You did find a bug, but xyz_start_outgoing_migration should be
asynchronous and the pinning should happen in the setup phase.  This is
because the setup phase is already running outside the big QEMU lock and
the guest would not be frozen.
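
The intended split can be sketched abstractly: the start function only
kicks the migration off and returns, and the pinning is deferred to the
setup phase, which runs outside the big lock.  All names below are
illustrative, not QEMU's actual code:

```c
#include <stdbool.h>

/* Illustrative state for an asynchronous outgoing migration. */
static bool pin_done;

/* The start function should only kick things off and return quickly;
 * the expensive pinning must NOT happen here, under the big lock. */
static bool start_outgoing_migration(void)
{
    return !pin_done;   /* true: we returned without pinning anything */
}

/* The setup phase already runs outside the big QEMU lock, so this is
 * where the mlock()/RDMA registration belongs; vCPUs keep running. */
static bool migration_setup(void)
{
    /* ... pin guest RAM here ... */
    pin_done = true;
    return pin_done;
}
```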

I think the patches are ready for merging, because incremental work
makes it easier to discuss the changes(*), but you really need to do two
things before 1.6, or I would rather revert them.

(1) move the pinning to the setup phase

(2) add a debug mode where every pass unpins all the memory and
restarts.  Speed doesn't matter, this is so that the protocol supports
it from the beginning, and any caching heuristics need to be done on the
source side.  As with all debug modes, it will be somewhat prone to
bitrot, but at least there is a reference implementation for anyone who
later wants to add caching.
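
The debug mode in (2) amounts to a drop-everything-and-re-register loop;
a self-contained sketch with made-up names (not the actual patch code):

```c
#include <stdbool.h>
#include <string.h>

#define NPAGES 8

static bool pinned[NPAGES];     /* simulated per-page registration state */
static int unpin_events;

static bool all_pinned(void)
{
    for (int i = 0; i < NPAGES; i++) {
        if (!pinned[i]) {
            return false;
        }
    }
    return true;
}

/* Run `passes` migration passes; in debug mode, every pass drops all
 * registrations first, so the protocol must cope with losing every pin
 * and any caching decision stays on the source side.  Returns the
 * number of unpin events observed. */
static int run_debug_migration(int passes)
{
    unpin_events = 0;
    for (int p = 0; p < passes; p++) {
        memset(pinned, 0, sizeof(pinned));   /* unpin all memory */
        unpin_events++;
        for (int i = 0; i < NPAGES; i++) {
            pinned[i] = true;                /* re-register lazily */
        }
    }
    return unpin_events;
}
```

Speed is irrelevant here; the point is that the wire protocol tolerates a
full unpin on every pass from day one.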

I think (2) is very important so that, for example, during fault
tolerance you can reduce the pinned size a bit for smaller workloads,
even without ballooning.

    (*) for example, why the introduction of acct_update_position?  Is
    it a fix for a bug that always existed, or driven by some other
    changes?

Paolo



