qemu-devel

Re: [Qemu-devel] [PATCH 01/15] qemu coroutine: support bypass mode


From: Ming Lei
Subject: Re: [Qemu-devel] [PATCH 01/15] qemu coroutine: support bypass mode
Date: Sat, 2 Aug 2014 10:42:54 +0800

On Sat, Aug 2, 2014 at 12:03 AM, Stefan Hajnoczi <address@hidden> wrote:
> On Fri, Aug 01, 2014 at 10:52:55PM +0800, Ming Lei wrote:
>> On Fri, Aug 1, 2014 at 9:48 PM, Ming Lei <address@hidden> wrote:
>> > On Fri, Aug 1, 2014 at 9:13 PM, Stefan Hajnoczi <address@hidden> wrote:
>> >> On Fri, Aug 01, 2014 at 10:54:02AM +0800, Ming Lei wrote:
>> >>> On Fri, Aug 1, 2014 at 12:30 AM, Paolo Bonzini <address@hidden> wrote:
>> >>> > On 31/07/2014 18:13, Ming Lei wrote:
>> >>> >> Follows 'perf report' result on cycles event for with/without bypass
>> >>> >> coroutine:
>> >>> >>
>> >>> >>     http://pastebin.com/ae0vnQ6V
>> >>> >>
>> >>> >> From the profiling result, it looks like bdrv_co_do_preadv() is a bit
>> >>> >> slow without bypass coroutine.
>> >>> >
>> >>> > Yeah, I can count at least 3.3% time spent here:
>> >>> >
>> >>> > 0.87%          bdrv_co_do_preadv
>> >>> > 0.79%          bdrv_aligned_preadv
>> >>> > 0.71%          qemu_coroutine_switch
>> >>> > 0.52%          tracked_request_begin
>> >>> > 0.45%          coroutine_swap
>> >>> >
>> >>> > Another ~3% wasted in malloc, etc.
>> >>>
>> >>> That should be related to the coroutine and the BH in bdrv_co_do_rw().
>> >>> In this post I didn't apply Stefan's coroutine resize patches, which
>> >>> might decrease malloc() usage for coroutines.
>> >>
>> >> Please rerun with "[PATCH v3 0/2] coroutine: dynamically scale pool
>> >> size".
>> >
>> > No problem, will do that. Actually, my last RFC post of this patchset
>> > was already against your coroutine resize patches.
>> >
>> > I will provide the profile data tomorrow.
>>
>> Please see below link for without bypass coroutine, and with
>> your coroutine resize patches(V3):
>>
>>    http://pastebin.com/10y00sir
>
> Thanks for sharing!
>
> Do you have the results (IOPS and perf report) for just the coroutine
> bypass (but not the other changes in this patch series)?
>
> Coroutine: 101k IOPS
> Bypass: ? IOPS

Please see the below link; sorry for missing the data for bypass coroutine,
which was collected under the same conditions:

http://pastebin.com/JqrpF87G
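For readers following the thread, the "bypass mode" being benchmarked can be sketched roughly as below. This is a hypothetical, simplified model (the names req_fn, submit_request, and run_in_coroutine are illustrative, not QEMU's actual API): when the backing driver is known never to yield, the request function is invoked directly instead of allocating a coroutine and paying for qemu_coroutine_switch in both directions.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical simplified model of bypass mode: if the entry
 * function is known never to yield, call it directly instead of
 * paying for coroutine creation and two context switches. */

typedef int (*req_fn)(void *opaque);

struct req_ctx {
    bool may_yield;    /* driver can block/yield -> needs a coroutine */
    int switches;      /* count of simulated context switches */
};

/* Stand-in for coroutine creation + qemu_coroutine_enter(). */
static int run_in_coroutine(req_fn fn, struct req_ctx *ctx)
{
    ctx->switches++;           /* switch into the coroutine */
    int ret = fn(ctx);
    ctx->switches++;           /* switch back when it terminates */
    return ret;
}

/* Dispatch: bypass the coroutine when it is safe to do so. */
static int submit_request(req_fn fn, struct req_ctx *ctx)
{
    if (!ctx->may_yield) {
        return fn(ctx);        /* fast path: plain function call */
    }
    return run_in_coroutine(fn, ctx);
}

static int do_read(void *opaque)
{
    (void)opaque;
    return 0;                  /* pretend the I/O completed */
}
```

In this toy model the fast path performs zero switches, which is the saving the perf profiles above (qemu_coroutine_switch, coroutine_swap) are measuring.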

Thanks,


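The malloc overhead discussed upthread is what a coroutine pool addresses: finished coroutines are parked on a free list and reused, so only the first requests beyond the pool's capacity hit the allocator. A rough sketch of that reuse pattern (hypothetical; co_get, co_put, and the fixed pool_max are illustrative, not the actual patch):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch of a coroutine free list: reuse finished
 * coroutine objects instead of calling malloc() per request. */

struct coroutine {
    struct coroutine *next;   /* free-list link */
};

static struct coroutine *pool_head;
static unsigned pool_size;
static unsigned pool_max = 64;      /* illustrative fixed cap */
static unsigned alloc_calls;        /* how often we hit the allocator */

static struct coroutine *co_get(void)
{
    if (pool_head) {                /* fast path: pop from the pool */
        struct coroutine *co = pool_head;
        pool_head = co->next;
        pool_size--;
        return co;
    }
    alloc_calls++;                  /* slow path: real allocation */
    return calloc(1, sizeof(struct coroutine));
}

static void co_put(struct coroutine *co)
{
    if (pool_size < pool_max) {     /* keep it around for reuse */
        co->next = pool_head;
        pool_head = co;
        pool_size++;
    } else {
        free(co);
    }
}
```

The "dynamically scale pool size" series referenced above refines this by growing and shrinking the cap with demand rather than using a fixed pool_max.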
