qemu-devel

Re: [Qemu-devel] [RFC PATCH RDMA support v4: 08/10] introduce QEMUFileRDMA


From: Michael R. Hines
Subject: Re: [Qemu-devel] [RFC PATCH RDMA support v4: 08/10] introduce QEMUFileRDMA
Date: Tue, 19 Mar 2013 14:27:59 -0400
User-agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20130106 Thunderbird/17.0.2

On 03/19/2013 10:22 AM, Paolo Bonzini wrote:
> On 19/03/2013 15:10, Michael R. Hines wrote:
>> On 03/19/2013 09:45 AM, Paolo Bonzini wrote:
>>>> This is because of downtime: you have to drain the queue anyway at
>>>> the very end, and if you don't drain it in advance after each
>>>> iteration, the queue will have lots of bytes in it waiting for
>>>> transmission, and the virtual machine will be stopped for a much
>>>> longer period of time during the last iteration, waiting for the
>>>> RDMA card to finish transmitting all of those bytes.
>>> Shouldn't the "current chunk full" case take care of it too?
>>>
>>> Of course, if you disable chunking you have to add a different
>>> condition, perhaps directly into save_rdma_page.
>> No, we don't want to flush on "chunk full" - that has a different
>> meaning. We want to have as many chunks as possible submitted to the
>> hardware for transmission, to keep the bytes moving.
>>> That, however, gives me an idea... Instead of the full drain at the
>>> end of an iteration, does it make sense to do a "partial" drain at
>>> every "chunk full", so that you never have more than N bytes pending
>>> and the downtime is correspondingly limited?
>> Sure, you could do that, but it seems overly complex just to avoid a
>> single flush() call at the end of each iteration, right?
> If there is no RAM migration in flight.  So you have
>
>     migrate RAM
>     ...
>     RAM migration finished, device migration starts
>     put_buffer <<<<< QEMUFileRDMA triggers drain
>     put_buffer
>     put_buffer
>     put_buffer
>     ...

Ah, yes, ok. Very simple modification.
