qemu-devel
[Top][All Lists]
Advanced

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Qemu-devel] [PATCH v1 00/17] dataplane: optimization and multi virt


From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH v1 00/17] dataplane: optimization and multi virtqueue support
Date: Wed, 13 Aug 2014 11:54:00 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

Am 12.08.2014 um 21:08 hat Paolo Bonzini geschrieben:
> Il 12/08/2014 10:12, Ming Lei ha scritto:
> >> > The below patch is basically the minimal change to bypass coroutines.  
> >> > Of course
> >> > the block.c part is not acceptable as is (the change to 
> >> > refresh_total_sectors
> >> > is broken, the others are just ugly), but it is a start.  Please run it 
> >> > with
> >> > your fio workloads, or write an aio-based version of a qemu-img/qemu-io 
> >> > *I/O*
> >> > benchmark.
> > Could you explain why the new change is introduced?
> 
> It provides a fast path for bdrv_aio_readv/writev whenever there is
> nothing to do after the driver routine returns.  In this case there is
> no need to wrap the AIOCB returned by the driver routine.
> 
> It doesn't go all the way, and in particular it doesn't reverse
> completely the roles of bdrv_co_readv/writev vs. bdrv_aio_readv/writev.

That's actually why I think it's an option. Remember that, like you say
below, we're optimising for an extreme case here, and I certainly don't
want to hurt the common case for it. I can't imagine a way of reversing
the roles without multiplying the cost for the coroutine path.

Or do you have a clever solution how you'd go about it without having an
impact on the common case?

>  But it is enough to provide something that is not dataplane-specific,
> does not break various functionality that we need to add to dataplane
> virtio-blk, does not mess up the semantics of the block layer, and lets
> you run benchmarks.
> 
> > I will hold it until we can align to the coroutine cost computation,
> > because it is very important for the discussion.
> 
> First of all, note that the coroutine cost is totally pointless in the
> discussion unless you have 100% CPU time and the dataplane thread
> becomes CPU bound.  You haven't said if this is the case.

That's probably the implicit assumption. As I said, it's an extreme
case we're trying to look at. I'm not sure how realistic it is when you
don't work with ramdisks...

> Second, if the coroutine cost is relevant, the profile is really too
> flat to do much about it.  The only solution (and here I *think* I
> disagree slightly with Kevin) is to get rid of it, which is not even too
> hard to do.

I think we just need to make the best use of coroutines. I would really
love to show you numbers, but I'm having a hard time benchmarking all
this stuff. When I test only the block layer with 'qemu-img bench', I
clearly have working optimisations, but it doesn't translate yet into
clear improvments for actual guests. I think other things in the way
from the guest to qemu slow it down so that in the end the coroutine
part doesn't matter much any more.

By the way, I just noticed that sequential reads were significantly
faster (~25%) for me without dataplane than with it. I didn't expect to
gain anything with dataplane on this setup, but certainly not to lose
that much. There might be more to gain there than by optimising or
removing coroutines.

> The problem is that your patches to do touch too much code and subtly
> break too much stuff.  The one I wrote does have a little breakage
> because I don't understand bs->growable 100% and I didn't really put
> much effort into it (my deadline being basically "be done as soon as the
> shower is free"), and it is ugly as hell, _but_ it should be compatible
> with the way the block layer works.

Yes, your patch is definitely much more palatable than Ming's. The part
that I still don't like about it is that it would be stating "in the
common case, we're only doing the second best thing". I'm not yet
convinced that coroutines perform necessarily worse than state-passing
callbacks.

Kevin



reply via email to

[Prev in Thread] Current Thread [Next in Thread]