Re: [Qemu-devel] [RFC PATCH RDMA support v4: 08/10] introduce QEMUFileRDMA


From: Paolo Bonzini
Subject: Re: [Qemu-devel] [RFC PATCH RDMA support v4: 08/10] introduce QEMUFileRDMA
Date: Tue, 19 Mar 2013 15:22:04 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130219 Thunderbird/17.0.3

On 19/03/2013 15:10, Michael R. Hines wrote:
> On 03/19/2013 09:45 AM, Paolo Bonzini wrote:
>> This is because of downtime: you have to drain the queue anyway at the
>> very end, and if you don't drain it in advance after each iteration,
>> then the queue will have lots of bytes in it waiting for transmission,
>> and the virtual machine will be stopped for a much longer period of
>> time during the last iteration, waiting for the RDMA card to finish
>> transmitting all those bytes.
>> Shouldn't the "current chunk full" case take care of it too?
>>
>> Of course if you disable chunking you have to add a different condition,
>> perhaps directly into save_rdma_page.
> 
> No, we don't want to flush on "chunk full" - that has a different meaning.
> We want to have as many chunks submitted to the hardware for transmission
> as possible to keep the bytes moving.

That however gives me an idea...  Instead of the full drain at the end
of an iteration, does it make sense to do a "partial" drain every time a
chunk fills, so that you never have more than N bytes pending and the
downtime is correspondingly bounded?
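
To make that concrete, here is a minimal C sketch of such a
bounded-inflight policy.  It is only an illustration of the idea, not the
patch's actual API: rdma_post_send, rdma_poll_one_completion,
inflight_bytes and MAX_INFLIGHT_BYTES are all hypothetical names.

#include <stddef.h>

/* Tunable cap on bytes posted to the HCA but not yet completed.  The
 * final drain then never has to wait for more than this much data. */
#define MAX_INFLIGHT_BYTES (64 * 1024 * 1024)

/* Assumed primitives: post one chunk for transmission, and block until
 * one work completion arrives, returning the bytes it accounts for. */
extern void rdma_post_send(const void *buf, size_t len);
extern size_t rdma_poll_one_completion(void);

static size_t inflight_bytes;

static void rdma_submit_chunk(const void *buf, size_t len)
{
    rdma_post_send(buf, len);
    inflight_bytes += len;

    /* Partial drain: instead of emptying the queue, reap just enough
     * completions to get back under the cap.  The queue keeps some
     * meat in it for the hardware, but the pending data - and hence
     * the downtime of the final drain - stays bounded. */
    while (inflight_bytes > MAX_INFLIGHT_BYTES) {
        inflight_bytes -= rdma_poll_one_completion();
    }
}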

>>>>> 3. And also during qemu_savevm_state_complete(), also using
>>>>> qemu_fflush.
>>>> This would be caught by put_buffer, but (2) would not.
>>>>
>>> I'm not sure this is good enough either - we don't want to flush
>>> the queue *frequently*, only when it's necessary for performance.
>>> We do want the queue to have some meat to it so the hardware can
>>> write bytes as fast as possible.
>>>
>>> If we flush inside put_buffer (which is called very frequently):
>> Is it called at any time during RAM migration?
> 
> I don't understand the question: the flushing we've been discussing
> is *only* for RAM migration - not for the non-live state.

Yes.  But I would like to piggyback the final, full drain on the switch
from RAM migration to device migration.

>> Can you make drain a no-op if there is nothing in flight?  Then every
>> call to put_buffer after the first should not have any overhead.
> 
> That still doesn't solve the problem: If there is nothing in flight,
> then there is no reason to call qemu_fflush() in the first place.

If there is no RAM migration in flight.  So you have

   migrate RAM
   ...
   RAM migration finished, device migration start
   put_buffer <<<<< QEMUFileRDMA triggers drain
   put_buffer
   put_buffer
   put_buffer
   ...
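
As a sketch of how that piggyback could look inside QEMUFileRDMA's
put_buffer hook (again with hypothetical names - the real fields and
helpers may differ), the drain runs on the first call after RAM
migration and degenerates to a cheap no-op once nothing is in flight:

#include <stdint.h>
#include <stddef.h>

typedef struct QEMUFileRDMA {
    size_t inflight_bytes;    /* posted to the HCA, not yet completed */
    /* ... connection state ... */
} QEMUFileRDMA;

extern void qemu_rdma_drain_queue(QEMUFileRDMA *r);   /* assumed helper */
extern int qemu_rdma_write_bytes(QEMUFileRDMA *r,     /* assumed helper */
                                 const uint8_t *buf, int size);

static int qemu_rdma_put_buffer(void *opaque, const uint8_t *buf,
                                int64_t pos, int size)
{
    QEMUFileRDMA *r = opaque;
    (void)pos;  /* sequential stream; position is not used here */

    /* Only the first put_buffer after RAM migration pays for the
     * drain; afterwards inflight_bytes is 0 and this branch is a
     * no-op, so device-state writes proceed at full speed. */
    if (r->inflight_bytes > 0) {
        qemu_rdma_drain_queue(r);
        r->inflight_bytes = 0;
    }

    /* Device state then flows over the ordinary send path. */
    return qemu_rdma_write_bytes(r, buf, size);
}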

> The flushes we need are only for RAM, not the rest of it.
> 
> Make sense?

Paolo


