Re: [Qemu-devel] [PATCH] migration: Fix rate limiting issue on RDMA migration


From: 858585 jemmy
Subject: Re: [Qemu-devel] [PATCH] migration: Fix rate limiting issue on RDMA migration
Date: Thu, 22 Mar 2018 19:57:49 +0800

On Wed, Mar 21, 2018 at 2:19 AM, Juan Quintela <address@hidden> wrote:
> Lidong Chen <address@hidden> wrote:
>> RDMA migration implements the save_page function for QEMUFile, but
>> ram_control_save_page does not increase bytes_xfer. So when doing
>> RDMA migration, it will use the whole bandwidth.
>>
>> Signed-off-by: Lidong Chen <address@hidden>
>
> Reviewed-by: Juan Quintela <address@hidden>
>
> This part of the code is a mess.
>
> To answer David:
> - pos: where we need to write that bit of stuff
> - bytes_xfer: how much we have written
>
> When we are doing snapshots on qcow2, we store memory in a contiguous
> piece of memory, so we can "overwrite" that "page" if a new version
> comes. Nothing else (except the block code) uses the "pos" parameter,
> so we can't rely on it.
>
> And that was just from a quick look at the code, which got me really
> confused (again).

Hi Juan:
     what is the problem?
     Thanks.

>
>
>
>> ---
>>  migration/qemu-file.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/migration/qemu-file.c b/migration/qemu-file.c
>> index 2ab2bf3..217609d 100644
>> --- a/migration/qemu-file.c
>> +++ b/migration/qemu-file.c
>> @@ -253,7 +253,7 @@ size_t ram_control_save_page(QEMUFile *f, ram_addr_t block_offset,
>>      if (f->hooks && f->hooks->save_page) {
>>          int ret = f->hooks->save_page(f, f->opaque, block_offset,
>>                                        offset, size, bytes_sent);
>> -
>> +        f->bytes_xfer += size;
>>          if (ret != RAM_SAVE_CONTROL_DELAYED) {
>>              if (bytes_sent && *bytes_sent > 0) {
>>                  qemu_update_position(f, *bytes_sent);
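To spell out why the one-liner matters: the dirty-page loop only throttles
once the limiter sees bytes_xfer grow, and the RDMA save_page hook sends the
data out of band, so without the increment the limiter never trips. The
following is a rough model under that assumption, with hypothetical names
(sketch_file, over_limit, send_dirty_pages), not QEMU's actual functions.

#include <stdbool.h>
#include <stdint.h>

/* Rough model: one iteration of sending dirty pages under a bandwidth
 * budget. */
struct sketch_file {
    int64_t bytes_xfer;
    int64_t xfer_limit;
};

static bool over_limit(struct sketch_file *f)
{
    return f->xfer_limit > 0 && f->bytes_xfer > f->xfer_limit;
}

static int64_t send_dirty_pages(struct sketch_file *f, int64_t page_size,
                                int64_t dirty_pages, bool hook_accounts)
{
    int64_t sent = 0;

    while (sent < dirty_pages && !over_limit(f)) {
        /* With the RDMA save_page hook the payload goes out of band.
         * Unless the hook path also adds to bytes_xfer (the fix above),
         * the limiter never sees the traffic and this loop never stops
         * early, i.e. the migration uses the whole bandwidth. */
        if (hook_accounts) {
            f->bytes_xfer += page_size;
        }
        sent++;
    }
    return sent;
}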


