From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH] vl.c/exit: pause cpus before closing block devices
Date: Tue, 8 Aug 2017 13:56:52 +0200
User-agent: Mutt/1.8.3 (2017-05-23)

On 08.08.2017 at 13:04, Paolo Bonzini wrote:
> On 08/08/2017 12:02, Kevin Wolf wrote:
> > On 04.08.2017 at 13:46, Paolo Bonzini wrote:
> >> On 04/08/2017 11:58, Stefan Hajnoczi wrote:
> >>>> the root cause of this bug is related to this as well:
> >>>> https://lists.gnu.org/archive/html/qemu-devel/2017-07/msg02945.html
> >>>>
> >>>> From commit 99723548 we started assuming (incorrectly?) that blk_*
> >>>> functions always WILL have an attached BDS, but this is not always
> >>>> true; for instance, flushing the cache of an empty CDROM.
> >>>>
> >>>> Paolo, can we move the flight counter increment outside of the
> >>>> block-backend layer, is that safe?
> >>> I think the bdrv_inc_in_flight(blk_bs(blk)) needs to be fixed
> >>> regardless of the throttling timer issue discussed below.  BB cannot
> >>> assume that the BDS graph is non-empty.
> >>
> >> Can we make bdrv_aio_* return NULL (even temporarily) if there is no
> >> attached BDS?  That would make it much easier to fix.
> > 
> > Would the proper fix be much more complicated than the following? I must
> > admit that I don't fully understand the current state of affairs with
> > respect to threading, AioContext etc. so I may well be missing
> > something.
> 
> Not much, but it's not complete either.  The issues I see are that: 1)
> blk_drain_all does not take the new counter into account;

Ok, I think this does the trick:

void blk_drain_all(void)
{
    BlockBackend *blk = NULL;

    bdrv_drain_all_begin();
    while ((blk = blk_all_next(blk)) != NULL) {
        blk_drain(blk);
    }
    bdrv_drain_all_end();
}
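
The point of the begin/end pair is that the whole graph stays quiesced
while we iterate over the BlockBackends, so no new requests can be
submitted between draining one BB and the next; blk_drain(blk) then only
has to wait for the requests that this specific BB still has in flight.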

> 2) bdrv_drain_all callers need to be audited to see if they should be
> blk_drain_all (or more likely, only device BlockBackends should be drained).

qmp_transaction() is unclear to me. It should be changed in some way
anyway because it uses bdrv_drain_all() rather than a begin/end pair.
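
To illustrate, a minimal sketch of the begin/end pattern (function name
made up; where exactly the calls would go in qmp_transaction() needs
checking):

static void transaction_drained_section(void)
{
    /* Instead of a single bdrv_drain_all(), keep all BDSes quiesced
     * for the whole critical section so that no new requests can be
     * submitted between preparing and committing the actions. */
    bdrv_drain_all_begin();

    /* ... prepare and commit/abort the transaction actions ... */

    bdrv_drain_all_end();
}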

do_vm_stop() and vm_stop_force_state() probably want blk_drain_all().

xen_invalidate_map_cache() - wtf? Looks like the wrong layer to do this,
but I guess blk_drain_all(), too.

block_migration_cleanup() is just lazy and really means a blk_drain()
for its own BlockBackends; blk_drain_all() would do as the simple
conversion, though.
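
What it really wants is probably something like this (just a sketch;
this assumes iterating over block migration's own BlkMigDevState list
in migration/block.c):

static void blk_mig_drain_own_devices(void)
{
    BlkMigDevState *bmds;

    /* Drain only the BlockBackends that block migration itself owns
     * instead of quiescing every BDS in the system. */
    QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
        blk_drain(bmds->blk);
    }
}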

migration/savevm: Migration wants blk_drain_all() to get the devices
quiesced.

qemu-io: blk_drain_all(), too.

Hm, looks like there won't be many callers of bdrv_drain_all() left. :-)

> > Note that my blk_drain() implementation doesn't necessarily drain
> > blk_bs(blk) completely, but only those requests that came from the
> > specific BlockBackend. I think this is what the callers want, but
> > if otherwise, it shouldn't be hard to change.
> 
> Yes, this should be what they want.

Apparently not; block jobs don't complete with it any more. I haven't
checked in detail, but it makes sense that they can have a BH (e.g. for
block_job_defer_to_main_loop) without a request being in flight.

So I'm including an unconditional bdrv_drain() again. Or, I guess,
calling aio_poll() unconditionally and including its return value in
the loop condition would be the cleaner approach?
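
Roughly like this (untested sketch; blk->in_flight is the new per-BB
request counter from the patch under discussion):

void blk_drain(BlockBackend *blk)
{
    BlockDriverState *bs = blk_bs(blk);
    bool progress;

    /* The BB may have no medium attached, so check for a BDS first */
    if (bs) {
        bdrv_drained_begin(bs);
    }

    do {
        /* Poll unconditionally so that BHs without an in-flight
         * request (e.g. block_job_defer_to_main_loop) still get to
         * run; block only while our own requests are pending. */
        progress = aio_poll(blk_get_aio_context(blk),
                            atomic_read(&blk->in_flight) > 0);
    } while (progress || atomic_read(&blk->in_flight) > 0);

    if (bs) {
        bdrv_drained_end(bs);
    }
}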

Kevin