Re: [Qemu-devel] [RFC 00/13] aio: drop io_flush()


From: Kevin Wolf
Subject: Re: [Qemu-devel] [RFC 00/13] aio: drop io_flush()
Date: Fri, 12 Apr 2013 12:04:01 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On 12.04.2013 at 11:49, Stefan Hajnoczi wrote:
> On Fri, Apr 12, 2013 at 10:02:02AM +0200, Kevin Wolf wrote:
> > On 11.04.2013 at 17:44, Stefan Hajnoczi wrote:
> > > Here's my entry to the "let's get rid of io_flush()" effort.  It's based
> > > on Paolo's insight about bdrv_drain_all() that the block layer already
> > > has a tracked_requests list.  io_flush() is redundant since the block
> > > layer already knows if requests are pending.
> > 
> > Except when there are requests that don't come from the guest, but are
> > issued internally. In this case, block.c doesn't know about them, but
> > only the block driver does, so we need a .bdrv_drain callback to tell
> > the block layer about these.
> > 
> > The one specific case that comes to mind is the QED timer for resetting
> > the dirty bit. I think you need to have the .bdrv_drain callback before
> > you can start ignoring .io_flush.
> 
> I think .bdrv_drain() might come in handy in the future if we need it.
> 
> Regarding the QED timer, it does not hook into bdrv_drain_all() today.
> The semantics are unchanged with this patch series applied.
> 
> The timer runs while the guest is running (vm_clock).  It doesn't fire
> during synchronous I/O since qemu_aio_wait() does not dispatch timers.

Okay, I think I need to be more precise. The scenario I have in mind is
the following one:

1. The timer triggers
2. QED sends bdrv_aio_flush() and returns
2b. The flush callback is invoked and we progress to qed_write_header
    (optional, but having a write instead of a flush may make the
    problem more visible), then return again
3. Someone calls bdrv_drain_all()
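
To make steps 1-2b concrete, the flow looks roughly like this
(condensed from block/qed.c; the function names are real, the bodies
are heavily simplified, and the completion callback passed to
qed_write_header() is illustrative):

static void qed_need_check_timer_cb(void *opaque)
{
    BDRVQEDState *s = opaque;

    /* Steps 1-2: the timer fired; kick off a flush and return to the
     * main loop.  block.c never sees this request, it is internal to
     * the QED driver. */
    bdrv_aio_flush(s->bs, qed_clear_need_check, s);
}

static void qed_clear_need_check(void *opaque, int ret)
{
    BDRVQEDState *s = opaque;

    /* Step 2b: the flush completed; rewrite the header to clear the
     * dirty bit.  This write is equally invisible to block.c's
     * tracked_requests list. */
    s->header.features &= ~QED_F_NEED_CHECK;
    qed_write_header(s, qed_write_header_done, s);
}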

Before removing .io_flush:

4. bdrv_drain_all() simply waits until the header update is complete,
   because the raw-posix driver returns io_flush = 1 as long as the
   update is in flight; only then does bdrv_drain_all() return.
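
For reference, the contract being relied on here is roughly the
following (a sketch, not the literal raw-posix code; RawState and its
in_flight counter are made up for illustration):

/* Old-style .io_flush callback: nonzero means "requests still in
 * flight", which keeps qemu_aio_wait() -- and with it
 * bdrv_drain_all() -- looping until the header update is done. */
static int raw_io_flush(void *opaque)
{
    RawState *s = opaque;

    return s->in_flight > 0;
}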

After removing .io_flush:

4. bdrv_drain_all() sees that the QED block device doesn't have any
   requests in flight. The raw-posix driver isn't even queried because
   it's an anonymous BDS. bdrv_drain_all() returns, but the request is
   still in flight. Oops.
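
In code, the failing check would look roughly like this (a sketch of
the post-series draining logic, not the literal patch):

static bool bdrv_requests_pending(void)
{
    BlockDriverState *bs;

    /* bdrv_states contains only *named* BDSes, so the anonymous
     * raw-posix BDS underneath QED is never inspected. */
    QTAILQ_FOREACH(bs, &bdrv_states, list) {
        if (!QLIST_EMPTY(&bs->tracked_requests)) {
            return true;
        }
    }

    /* Looks idle -- but QED's header write is still in flight in the
     * anonymous bs->file. */
    return false;
}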

This specific case could also be fixed by querying all BDSes instead
of only the named ones, but in general a block driver doesn't
necessarily call into a lower-level BDS for the background operations
that must be stopped, so that alone wouldn't cover every driver.

Or actually, if you want to avoid .bdrv_drain for now: the patch that I
started when I thought a bit about this had a default .bdrv_drain
implementation that simply forwarded the request to bs->file if the
block driver didn't implement the callback. For the QED case, this
would work.
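
Roughly like this, with .bdrv_drain being the proposed (not yet
existing) driver callback:

static void bdrv_drain_recurse(BlockDriverState *bs)
{
    if (bs->drv && bs->drv->bdrv_drain) {
        /* The driver knows how to stop its own background work. */
        bs->drv->bdrv_drain(bs);
    } else if (bs->file) {
        /* Default: forward to the underlying BDS so that internally
         * issued requests, like QED's header update, get drained. */
        bdrv_drain_recurse(bs->file);
    }
}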

Kevin


