From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH 4/4] block: avoid creating oversized writes in multiwrite_merge
Date: Tue, 30 Sep 2014 10:03:37 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On 30.09.2014 09:26, Peter Lieven wrote:
> On 23.09.2014 12:05, Kevin Wolf wrote:
> >On 23.09.2014 11:52, Peter Lieven wrote:
> >>On 23.09.2014 11:47, Kevin Wolf wrote:
> >>>On 23.09.2014 11:32, Peter Lieven wrote:
> >>>>On 23.09.2014 10:59, Kevin Wolf wrote:
> >>>>>On 23.09.2014 08:15, Peter Lieven wrote:
> >>>>>>On 22.09.2014 21:06, Paolo Bonzini wrote:
> >>>>>>>On 22/09/2014 11:43, Peter Lieven wrote:
> >>>>>>>>This series does not aim to change default behaviour. The default for
> >>>>>>>>max_transfer_length is 0 (no limit). max_transfer_length is a limit
> >>>>>>>>that MUST be satisfied, otherwise the request will fail. And patch 2
> >>>>>>>>aims at catching this failure earlier in the stack.
> >>>>>>>Understood.  But the right fix is to avoid letting backend limits leak
> >>>>>>>into the guest ABI, not to catch the limits earlier.  So the right fix
> >>>>>>>would be to implement request splitting.
> >>>>>>Since you proposed to add traces for this, would you leave those in?
> >>>>>>And since iSCSI is the only user of this at the moment, would you
> >>>>>>go for implementing this check in the iSCSI block layer?
> >>>>>>
> >>>>>>As for the split logic, do you think it is enough to build new qiov's
> >>>>>>out of the too big iov without copying the contents? This would work
> >>>>>>as long as a single iov inside the qiov is not bigger than
> >>>>>>max_transfer_length.
> >>>>>>Otherwise I would need to allocate temporary buffers and copy around.
> >>>>>You can split single iovs, too. There are functions that make this very
> >>>>>easy, they copy a sub-qiov with a byte granularity offset and length
> >>>>>(qemu_iovec_concat and friends). qcow2 uses the same to split requests
> >>>>>at (fragmented) cluster boundaries.
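
(For illustration only, not part of the original thread: a minimal sketch of how
qemu_iovec_concat() can carve byte-granularity chunks out of an existing qiov
without copying any payload. The chunking helper and the names process_in_chunks
and max_bytes are made up for this example.)

    #include "qemu-common.h"

    /* Sketch: walk 'qiov' in pieces of at most 'max_bytes' bytes each,
     * without copying the data it describes. */
    static void process_in_chunks(QEMUIOVector *qiov, size_t max_bytes)
    {
        QEMUIOVector chunk;     /* reused for each piece of the original qiov */
        size_t offset = 0;

        qemu_iovec_init(&chunk, qiov->niov);
        while (offset < qiov->size) {
            size_t len = MIN(max_bytes, qiov->size - offset);

            /* qemu_iovec_concat() only references the buffers of 'qiov',
             * splitting single iovs at byte granularity where needed. */
            qemu_iovec_reset(&chunk);
            qemu_iovec_concat(&chunk, qiov, offset, len);

            /* ... issue one request for 'chunk' here ... */

            offset += len;
        }
        qemu_iovec_destroy(&chunk);
    }
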
> >>>>Might it be as easy as this?
> >>>This is completely untested, right? :-)
> >>Yes :-)
> >>I was just unsure if the general approach is right.
> >Looks alright to me.
> >
> >>>But ignoring bugs, the principle looks right.
> >>>
> >>>>static int coroutine_fn bdrv_co_do_readv(BlockDriverState *bs,
> >>>>     int64_t sector_num, int nb_sectors, QEMUIOVector *qiov,
> >>>>     BdrvRequestFlags flags)
> >>>>{
> >>>>     if (nb_sectors < 0 || nb_sectors > (UINT_MAX >> BDRV_SECTOR_BITS)) {
> >>>>         return -EINVAL;
> >>>>     }
> >>>>
> >>>>     if (bs->bl.max_transfer_length &&
> >>>>         nb_sectors > bs->bl.max_transfer_length) {
> >>>>         int ret = 0;
> >>>>         QEMUIOVector *qiov2 = NULL;
> >>>Make it "QEMUIOVector qiov2;" on the stack.
> >>>
> >>>>         size_t soffset = 0;
> >>>>
> >>>>         trace_bdrv_co_do_readv_toobig(bs, sector_num, nb_sectors,
> >>>>                                       bs->bl.max_transfer_length);
> >>>>
> >>>>         qemu_iovec_init(qiov2, qiov->niov);
> >>>And &qiov2 here, then this doesn't crash with a NULL dereference.
> >>>
> >>>>         while (nb_sectors > bs->bl.max_transfer_length && !ret) {
> >>>>             qemu_iovec_reset(qiov2);
> >>>>             qemu_iovec_concat(qiov2, qiov, soffset,
> >>>>                               bs->bl.max_transfer_length << BDRV_SECTOR_BITS);
> >>>>             ret = bdrv_co_do_preadv(bs, sector_num << BDRV_SECTOR_BITS,
> >>>>                                     bs->bl.max_transfer_length << BDRV_SECTOR_BITS,
> >>>>                                     qiov2, flags);
> >>>>             soffset += bs->bl.max_transfer_length << BDRV_SECTOR_BITS;
> >>>>             sector_num += bs->bl.max_transfer_length;
> >>>>             nb_sectors -= bs->bl.max_transfer_length;
> >>>>         }
> >>>>         qemu_iovec_destroy(qiov2);
> >>>>         if (ret) {
> >>>>             return ret;
> >>>>         }
> >>>The error check needs to be immediately after the assignment of ret,
> >>>otherwise the next loop iteration can overwrite an error with a success
> >>>(and if it didn't, it would still do useless I/O because the request as
> >>>a whole would fail anyway).
> >>There is a && !ret in the loop condition. I wanted to avoid copying the 
> >>destroy part.
> >Ah, yes, clever. I missed that. Maybe too clever then. ;-)
> >
> >>BTW, is it !ret or ret < 0 ?
> >It only ever returns 0 or negative, so both are equivalent. I
> >prefer checks for ret < 0, but that's a matter of style rather than
> >correctness.
> >
> >Another problem I just noticed is that this is not the only caller of
> >bdrv_co_do_preadv(), so you're not splitting all requests. None of the
> >synchronous bdrv_read/write/pread/pwrite/pwritev functions get the
> >functionality this way.
> >
> >Perhaps you should be doing it inside bdrv_co_do_preadv(), before the
> >call to bdrv_aligned_preadv(). It might even be more correct if the
> >alignment adjustment can grow a request beyond bl.max_transfer_length.
> 
> If I do it this way, can I use the same req object for all split
> requests?

That's a good question. I think as long as you process the parts of the
split request one after another, reusing the same req object should be
safe. If you were to process them in parallel, though, I wouldn't be as
sure about it (which you probably don't want because it complicates
things :-)).

Probably the most obviously correct way to handle things would be to
have one tracked_request_begin/end() for the whole request and then call
bdrv_aligned_preadv() multiple times in between. Otherwise you'd have to
serialise each part individually, etc.

Kevin
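
(Not part of the thread above: a rough, untested sketch of the structure Kevin
suggests, i.e. splitting inside bdrv_co_do_preadv() under a single tracked
request and calling bdrv_aligned_preadv() once per max_transfer_length sized
piece. The internal helper signatures are assumed from the block.c of that era
and may differ; the bounce-buffer alignment handling of the real function is
omitted here.)

    static int coroutine_fn bdrv_co_do_preadv_split(BlockDriverState *bs,
        int64_t offset, unsigned int bytes, QEMUIOVector *qiov,
        int64_t align, BdrvRequestFlags flags)
    {
        BdrvTrackedRequest req;
        QEMUIOVector chunk;
        size_t max_bytes = bs->bl.max_transfer_length << BDRV_SECTOR_BITS;
        size_t done = 0;
        int ret = 0;

        /* One tracked request for the whole range ... */
        tracked_request_begin(&req, bs, offset, bytes, false);
        qemu_iovec_init(&chunk, qiov->niov);

        while (done < bytes && ret == 0) {
            size_t len = max_bytes ? MIN(max_bytes, bytes - done)
                                   : bytes - done;

            /* ... but several bdrv_aligned_preadv() calls, one per piece,
             * each with a sub-qiov that references the original buffers. */
            qemu_iovec_reset(&chunk);
            qemu_iovec_concat(&chunk, qiov, done, len);
            ret = bdrv_aligned_preadv(bs, &req, offset + done, len,
                                      align, &chunk, flags);
            done += len;
        }

        qemu_iovec_destroy(&chunk);
        tracked_request_end(&req);
        return ret;
    }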


