From: Peter Lieven
Subject: Re: [Qemu-devel] [PATCHv5 06/10] migration: search for zero instead of dup pages
Date: Mon, 8 Apr 2013 10:50:20 +0200

Am 08.04.2013 um 10:49 schrieb Kevin Wolf <address@hidden>:

> Am 08.04.2013 um 10:33 hat Peter Lieven geschrieben:
>> 
>> Am 05.04.2013 um 21:23 schrieb Kevin Wolf <address@hidden>:
>> 
>>> Am 26.03.2013 um 10:58 hat Peter Lieven geschrieben:
>>>> Virtually all dup pages are zero pages. Remove
>>>> the special is_dup_page() function and use the
>>>> optimized buffer_find_nonzero_offset() function
>>>> instead.
>>>> 
>>>> Here buffer_find_nonzero_offset() is used directly
>>>> to avoid the unnecessary additional checks in
>>>> buffer_is_zero().
>>>> 
>>>> The raw performance gain when checking 1 GByte of zeroed
>>>> memory over is_dup_page() is approx. 10-12% with SSE2
>>>> and 8-10% with unsigned long arithmetic.
>>>> 
>>>> Signed-off-by: Peter Lieven <address@hidden>
>>>> Reviewed-by: Orit Wasserman <address@hidden>
>>>> Reviewed-by: Eric Blake <address@hidden>
>>> 
>>> Okay, so I bisected again and this is the second patch that is involved
>>> in the slowness of qemu-iotests case 007.
>>> 
>> 
>> Can you check whether the following solves your issue:
>> 
>> diff --git a/exec.c b/exec.c
>> index 786987a..54baa4a 100644
>> --- a/exec.c
>> +++ b/exec.c
>> @@ -1071,6 +1071,7 @@ ram_addr_t qemu_ram_alloc_from_ptr(ram_addr_t size, void *host,
>>             memory_try_enable_merging(new_block->host, size);
>>         }
>>     }
>> +    qemu_madvise(new_block->host, size, QEMU_MADV_DONTNEED);
>>     new_block->length = size;
>> 
>>     /* Keep the list sorted from biggest to smallest block.  */
> 
> It does. But perhaps Paolo's suggestion of using mmap() to allocate the
> memory would be better. I'm not sure how MADV_DONTNEED behaves on
> non-Linux.

It's not guaranteed to zero memory.

Peter

> 
> Kevin



