Re: [Qemu-devel] [PATCH v3 2/2] virtio-blk: Use bdrv_aio_multiwrite


From: Christoph Hellwig
Subject: Re: [Qemu-devel] [PATCH v3 2/2] virtio-blk: Use bdrv_aio_multiwrite
Date: Fri, 11 Sep 2009 20:39:25 +0200
User-agent: Mutt/1.3.28i

On Fri, Sep 11, 2009 at 09:10:20AM +0200, Kevin Wolf wrote:
> >> +    blkreq[*num_writes].sector = req->out->sector;
> >> +    blkreq[*num_writes].nb_sectors = req->qiov.size / 512;
> >> +    blkreq[*num_writes].qiov = &req->qiov;
> >> +    blkreq[*num_writes].cb = virtio_blk_rw_complete;
> >> +    blkreq[*num_writes].opaque = req;
> >> +    blkreq[*num_writes].error = 0;
> >> +
> >> +    (*num_writes)++;
> > 
> > If you pass the completion routine to the function and map the error case
> > to calling the completion routine (which is the usual way to handle errors
> > anyway), this function could become completely generic.
> 
> Except that VirtIOBlockReq doesn't seem to be a type commonly used in
> generic code.

Yeah, we'd need to pass it only as an opaque cookie and the qiov/sector
separately, making the whole thing look more similar to how the block
API works elsewhere.
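
For illustration, a rough sketch of such a generic helper, assuming the
BlockRequest/bdrv_aio_multiwrite interfaces from patch 1/2; the helper name,
MAX_MULTIWRITE_REQS and the overflow flush are made up for this sketch, and
the error-to-callback mapping suggested above is omitted for brevity:

/* Sketch only: queue one write into a caller-provided BlockRequest array
 * for later submission via bdrv_aio_multiwrite().  The request travels
 * purely as the opaque cookie, so nothing here knows about VirtIOBlockReq. */
static void blk_queue_aio_write(BlockDriverState *bs,
                                BlockRequest *blkreq, int *num_writes,
                                int64_t sector, QEMUIOVector *qiov,
                                BlockDriverCompletionFunc *cb, void *opaque)
{
    if (*num_writes == MAX_MULTIWRITE_REQS) {
        /* Batch full: submit what has been queued so far (whether this
         * belongs here or in the caller is a separate question). */
        bdrv_aio_multiwrite(bs, blkreq, *num_writes);
        *num_writes = 0;
    }

    blkreq[*num_writes].sector = sector;
    blkreq[*num_writes].nb_sectors = qiov->size / 512;
    blkreq[*num_writes].qiov = qiov;
    blkreq[*num_writes].cb = cb;
    blkreq[*num_writes].opaque = opaque;
    blkreq[*num_writes].error = 0;

    (*num_writes)++;
}

/* Caller side in virtio-blk: the request is passed only as the cookie. */
blk_queue_aio_write(bs, blkreq, &num_writes, req->out->sector, &req->qiov,
                    virtio_blk_rw_complete, req);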

> > Any chance to just use this batched submission unconditionally and
> > also for reads?  I'd hate to grow even more confusing I/O methods
> > in the block layer.
> 
> If we want to completely obsolete bdrv_aio_readv/writev by batch
> submission functions (not only in block.c but also in each block
> driver), we certainly can do that. I think this would make a lot of
> sense, but it's quite some work and definitely out of scope for this
> patch, which is basically meant to be a qcow2 performance fix.

I'm generally not a big fan of incomplete transitions; history tells us
they will remain incomplete for a long time or even forever and grow
more and more of the old calls.  The persistent existence of the non-AIO
block APIs in qemu is one of those cases.
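
For illustration, one possible shape of the unified batch submission Kevin
describes above; nothing below exists in the block layer today, and the
is_write flag and the bdrv_aio_multirw name are invented for this sketch
(the remaining BlockRequest fields are the ones from patch 1/2):

/* Sketch only: give BlockRequest a direction flag so one batched entry
 * point can carry both reads and writes. */
typedef struct BlockRequest {
    int64_t sector;
    int nb_sectors;
    QEMUIOVector *qiov;
    int is_write;                       /* hypothetical direction flag */
    BlockDriverCompletionFunc *cb;
    void *opaque;
    int error;
} BlockRequest;

/* Hypothetical single entry point replacing bdrv_aio_readv/writev; each
 * block driver would implement a matching hook, and the existing
 * single-request calls could live on as one-element batches during the
 * transition. */
int bdrv_aio_multirw(BlockDriverState *bs, BlockRequest *reqs, int num_reqs);

That would keep the merging logic in one place in block.c while the block
drivers only ever see batches.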
