From: Paolo Bonzini
Subject: Re: [Qemu-devel] [RFC PATCH RDMA support v4: 08/10] introduce QEMUFileRDMA
Date: Tue, 19 Mar 2013 14:25:07 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130219 Thunderbird/17.0.3

On 19/03/2013 14:12, Michael R. Hines wrote:
> On 03/19/2013 05:18 AM, Paolo Bonzini wrote:
>> On 18/03/2013 21:33, Michael R. Hines wrote:
>>>> +int qemu_drain(QEMUFile *f)
>>>> +{
>>>> +    return f->ops->drain ? f->ops->drain(f->opaque) : 0;
>>>> +}
>>>> Hmm, this is very similar to qemu_fflush, but not quite. :/
>>>>
>>>> Why exactly is this needed?
>>> Good idea - I'll replace drain with flush once I've added the
>>> "qemu_file_ops_are(const QEMUFile *, const QEMUFileOps *)" helper
>>> that you recommended.
>> If I understand correctly, the problem is that save_rdma_page is
>> asynchronous and you have to wait for pending operations to do the
>> put_buffer protocol correctly.
>>
>> Would it work to just do the "drain" in the put_buffer operation, if and
>> only if it was preceded by a save_rdma_page operation?
> 
> Yes, the drain needs to happen in a few places already:
> 
> 1. During save_rdma_page (if the current "chunk" is full of pages)

Ok, this is internal to RDMA so no problem.

> 2. At the end of each iteration (now using qemu_fflush in my current
> patch)

Why?

> 3. During qemu_savevm_state_complete(), also using qemu_fflush.

This would be caught by put_buffer, but (2) would not.
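
(For concreteness, a minimal sketch of the "drain inside put_buffer" idea.
Everything here is hypothetical for illustration -- RDMAContext,
pending_writes, qemu_rdma_drain_cq and qemu_rdma_send_control are made-up
names, not code from the patch:)

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical, simplified context -- not the structures from the patch. */
    typedef struct RDMAContext {
        bool pending_writes;   /* set by save_rdma_page after posting async writes */
        /* ... queue pair, completion queue, etc. ... */
    } RDMAContext;

    static int qemu_rdma_drain_cq(RDMAContext *rdma);        /* wait for completions */
    static int qemu_rdma_send_control(RDMAContext *rdma,
                                      const uint8_t *buf, int size);

    /* put_buffer hook: drain lazily, only when an earlier save_rdma_page
     * actually left asynchronous writes in flight. */
    static int qemu_rdma_put_buffer(void *opaque, const uint8_t *buf,
                                    int64_t pos, int size)
    {
        RDMAContext *rdma = opaque;

        if (rdma->pending_writes) {
            int ret = qemu_rdma_drain_cq(rdma);
            if (ret < 0) {
                return ret;
            }
            rdma->pending_writes = false;
        }

        /* Only now is it safe to run the put_buffer protocol. */
        return qemu_rdma_send_control(rdma, buf, size);
    }

With that, case (3) needs no special handling at all; only (2) still needs
an explicit flush somewhere.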

>>>>>    /** Flushes QEMUFile buffer
>>>>>     *
>>>>>     */
>>>>> @@ -723,6 +867,8 @@ int qemu_get_byte(QEMUFile *f)
>>>>>    int64_t qemu_ftell(QEMUFile *f)
>>>>>    {
>>>>>        qemu_fflush(f);
>>>>> +    if(migrate_use_rdma(f))
>>>>> +    return delta_norm_mig_bytes_transferred();
>>>> Not needed, and another undesirable dependency (savevm.c ->
>>>> arch_init.c).  Just update f->pos in save_rdma_page.
>>> f->pos isn't good enough because save_rdma_page does not
>>> go through QEMUFile directly - only non-live state goes
>>> through QEMUFile; pc.ram uses direct RDMA writes.
>>>
>>> As a result, the position pointer does not get updated
>>> and the accounting is missed.
>> Yes, I am suggesting to modify f->pos in save_rdma_page instead.
>>
>> Paolo
>>
> 
> Would that not confuse the other QEMUFile users?
> If I change that pointer (without actually putting bytes
> into QEMUFile), won't the f->pos pointer be
> incorrectly updated?

f->pos is never used directly by QEMUFile; it is almost an opaque value.
It is accumulated on every qemu_fflush (so that it can be passed to the
->put_buffer function) and returned by qemu_ftell; nothing else.

If you somehow make save_rdma_page a new op, returning the number of bytes
transferred from that op and adding it to f->pos would be a good way to
achieve this.
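
(Roughly along these lines -- a sketch only, assuming the hook is called
save_page and returns the byte count; none of these names or signatures
are from the actual patch:)

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch only -- simplified stand-ins for the real QEMUFile internals. */
    typedef struct QEMUFileOps {
        /* ... existing hooks (put_buffer, get_buffer, close, ...) ... */
        size_t (*save_page)(void *opaque, uint64_t block_offset,
                            uint64_t offset, size_t size);
    } QEMUFileOps;

    typedef struct QEMUFile {
        const QEMUFileOps *ops;
        void *opaque;
        int64_t pos;           /* only accumulated, then read by qemu_ftell */
        /* ... */
    } QEMUFile;

    /* Wrapper that keeps f->pos accounting consistent even though the
     * page bytes bypass the QEMUFile buffer entirely. */
    size_t qemu_save_page(QEMUFile *f, uint64_t block_offset,
                          uint64_t offset, size_t size)
    {
        size_t sent = 0;

        if (f->ops->save_page) {
            sent = f->ops->save_page(f->opaque, block_offset, offset, size);
            f->pos += sent;    /* qemu_ftell() now reflects RDMA traffic too */
        }
        return sent;
    }

That keeps the accounting inside qemu-file, with no dependency from
savevm.c on arch_init.c.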

Paolo



