From: Kevin Wolf
Subject: Re: [Qemu-block] [PATCH for-2.11 3/4] coroutine: Cancel aio_co_schedule() on direct entry
Date: Tue, 28 Nov 2017 18:03:31 +0100
User-agent: Mutt/1.9.1 (2017-09-22)

Am 28.11.2017 um 17:42 hat Jeff Cody geschrieben:
> On Tue, Nov 28, 2017 at 05:28:50PM +0100, Kevin Wolf wrote:
> > Am 28.11.2017 um 17:14 hat Paolo Bonzini geschrieben:
> > > On 28/11/2017 16:43, Kevin Wolf wrote:
> > > > +    /* Make sure that a coroutine that can alternatively be reentered
> > > > +     * from two different sources isn't reentered more than once when
> > > > +     * the first caller uses aio_co_schedule() and the other one enters
> > > > +     * the coroutine directly. This is achieved by cancelling the
> > > > +     * pending aio_co_schedule().
> > > > +     *
> > > > +     * The other way round, if aio_co_schedule() were called after
> > > > +     * this point, that would be a problem too, but in practice it
> > > > +     * doesn't happen because we're holding the AioContext lock here
> > > > +     * and aio_co_schedule() callers must do the same.
> > > 
> > > No, this is not true.  aio_co_schedule is thread-safe.
> > 
> > Hm... With the reproducer we were specifically looking at
> > qmp_block_job_cancel(), which does take the AioContext locks. But it
> > might not be as universal as I thought.
> > 
> > To be honest, I just wasn't sure what to do with this case anyway. It
> > means that the coroutine is already running when someone else schedules
> > it. We don't really know whether we have to enter it a second time or
> > not.
> > 
> > So if it can indeed happen in practice, we need to think a bit more
> > about this.
> 
> It would be nice if, on coroutine termination, we could unschedule all
> pending executions for that coroutine.  I think use-after-free is the main
> concern for someone else calling aio_co_schedule() while the coroutine is
> currently running.

I'd rather assert that this never happens.

Because if the same situation arises but the coroutine doesn't
terminate and instead just yields for another reason, then reentering
it on the assumption that it's still at the previous yield point is a
bad bug that you definitely want to catch.

Kevin
