From: Fam Zheng
Subject: Re: [Qemu-devel] [PATCH for-2.11] block: Keep strong reference when draining all BDS
Date: Fri, 10 Nov 2017 21:32:07 +0800
User-agent: Mutt/1.9.1 (2017-09-22)

On Fri, 11/10 14:17, Kevin Wolf wrote:
> On 10.11.2017 at 03:45, Fam Zheng wrote:
> > On Thu, 11/09 21:43, Max Reitz wrote:
> > > Draining a BDS may lead to graph modifications, which in turn may result
> > > in it and other BDS being stripped of their current references.  If
> > > bdrv_drain_all_begin() and bdrv_drain_all_end() do not keep strong
> > > references themselves, the BDS they are trying to drain (or undrain) may
> > > disappear right under their feet -- or, more specifically, under the
> > > feet of BDRV_POLL_WHILE() in bdrv_drain_recurse().
> > > 
> > > This fixes an occasional hang of iotest 194.
> > > 
> > > Signed-off-by: Max Reitz <address@hidden>
> > > ---
> > >  block/io.c | 47 ++++++++++++++++++++++++++++++++++++++++++++---
> > >  1 file changed, 44 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/block/io.c b/block/io.c
> > > index 3d5ef2cabe..a0a2833e8e 100644
> > > --- a/block/io.c
> > > +++ b/block/io.c
> > > @@ -340,7 +340,10 @@ void bdrv_drain_all_begin(void)
> > >      bool waited = true;
> > >      BlockDriverState *bs;
> > >      BdrvNextIterator it;
> > > -    GSList *aio_ctxs = NULL, *ctx;
> > > +    GSList *aio_ctxs = NULL, *ctx, *bs_list = NULL, *bs_list_entry;
> > > +
> > > +    /* Must be called from the main loop */
> > > +    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
> > >  
> > >      block_job_pause_all();
> > >  
> > > @@ -355,6 +358,12 @@ void bdrv_drain_all_begin(void)
> > >          if (!g_slist_find(aio_ctxs, aio_context)) {
> > >              aio_ctxs = g_slist_prepend(aio_ctxs, aio_context);
> > >          }
> > > +
> > > +        /* Keep a strong reference to all root BDS and copy them into
> > > +         * an own list because draining them may lead to graph
> > > +         * modifications. */
> > > +        bdrv_ref(bs);
> > > +        bs_list = g_slist_prepend(bs_list, bs);
> > >      }
> > >  
> > >      /* Note that completion of an asynchronous I/O operation can trigger any
> > > @@ -370,7 +379,11 @@ void bdrv_drain_all_begin(void)
> > >              AioContext *aio_context = ctx->data;
> > >  
> > >              aio_context_acquire(aio_context);
> > > -            for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
> > > +            for (bs_list_entry = bs_list; bs_list_entry;
> > > +                 bs_list_entry = bs_list_entry->next)
> > > +            {
> > > +                bs = bs_list_entry->data;
> > > +
> > >                  if (aio_context == bdrv_get_aio_context(bs)) {
> > >                      waited |= bdrv_drain_recurse(bs, true);
> > >                  }
> > > @@ -379,24 +392,52 @@ void bdrv_drain_all_begin(void)
> > >          }
> > >      }
> > >  
> > > +    for (bs_list_entry = bs_list; bs_list_entry;
> > > +         bs_list_entry = bs_list_entry->next)
> > > +    {
> > > +        bdrv_unref(bs_list_entry->data);
> > > +    }
> > > +
> > >      g_slist_free(aio_ctxs);
> > > +    g_slist_free(bs_list);
> > >  }
> > >  
> > >  void bdrv_drain_all_end(void)
> > >  {
> > >      BlockDriverState *bs;
> > >      BdrvNextIterator it;
> > > +    GSList *bs_list = NULL, *bs_list_entry;
> > > +
> > > +    /* Must be called from the main loop */
> > > +    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
> > >  
> > > +    /* Keep a strong reference to all root BDS and copy them into an
> > > +     * own list because draining them may lead to graph modifications.
> > > +     */
> > >      for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
> > > -        AioContext *aio_context = bdrv_get_aio_context(bs);
> > > +        bdrv_ref(bs);
> > > +        bs_list = g_slist_prepend(bs_list, bs);
> > > +    }
> > > +
> > > +    for (bs_list_entry = bs_list; bs_list_entry;
> > > +         bs_list_entry = bs_list_entry->next)
> > > +    {
> > > +        AioContext *aio_context;
> > > +
> > > +        bs = bs_list_entry->data;
> > > +        aio_context = bdrv_get_aio_context(bs);
> > >  
> > >          aio_context_acquire(aio_context);
> > >          aio_enable_external(aio_context);
> > >          bdrv_parent_drained_end(bs);
> > >          bdrv_drain_recurse(bs, false);
> > >          aio_context_release(aio_context);
> > > +
> > > +        bdrv_unref(bs);
> > >      }
> > >  
> > > +    g_slist_free(bs_list);
> > > +
> > >      block_job_resume_all();
> > >  }
> > 
> > Would it be better to put the references into BdrvNextIterator and introduce
> > bdrv_next_iterator_destroy() to free them? You'd need to touch all callers
> > because this isn't C++, but it would secure all the other callers, which seem
> > vulnerable to the same pattern, for example the aio_poll() in
> > iothread_stop_all().
> 
> You could automatically free the references when bdrv_next() returns
> NULL. Then you need an explicit bdrv_next_iterator_destroy() only for
> callers that stop iterating halfway through the list.

Yes, good idea.
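
A rough sketch of how a caller could look if bdrv_next() kept the references
inside the iterator. bdrv_next_iterator_destroy() is only the helper proposed
above and does not exist yet, the automatic release once bdrv_next() returns
NULL is the refinement suggested here, and the function name and the
stop_after_first parameter are invented purely for illustration; everything
else is existing block-layer API:

#include "qemu/osdep.h"
#include "block/block.h"

static void drain_all_roots_example(bool stop_after_first)
{
    BlockDriverState *bs;
    BdrvNextIterator it;

    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
        AioContext *ctx = bdrv_get_aio_context(bs);

        /* With the proposed change, the iterator itself would hold a
         * strong reference to bs, so graph modifications triggered by
         * draining could no longer free it under BDRV_POLL_WHILE(). */
        aio_context_acquire(ctx);
        bdrv_drain(bs);
        aio_context_release(ctx);

        if (stop_after_first) {
            /* A caller that stops iterating halfway through the list
             * would release the remaining references explicitly. */
            bdrv_next_iterator_destroy(&it);
            return;
        }
    }

    /* Once bdrv_next() has returned NULL, the iterator has already
     * dropped all of its references; no explicit cleanup is needed. */
}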

> 
> Do you actually need to keep references to all BDSes in the whole list
> while using the iterator or would it be enough to just keep a reference
> to the current one?

To fix the bug we're seeing now, I think keeping a reference to the current one
is enough, but implementing it just as this patch does is also fine as some
future-proofing: we cannot know what will be wedged into the nested aio_poll()s
over time (and yes, we should really reduce the number of them).
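
A minimal sketch of that lighter variant, holding a reference only to the BDS
currently being processed; bdrv_next_keep_ref() is a hypothetical wrapper name
invented here, the rest is existing API:

#include "qemu/osdep.h"
#include "block/block.h"

/* Pin only the BDS currently being processed.  The iterator is advanced
 * while the previous BDS is still referenced, and only afterwards is
 * that reference dropped, so the iterator never reads freed memory. */
static BlockDriverState *bdrv_next_keep_ref(BdrvNextIterator *it,
                                            BlockDriverState *prev)
{
    BlockDriverState *bs = prev ? bdrv_next(it) : bdrv_first(it);

    if (bs) {
        bdrv_ref(bs);
    }
    if (prev) {
        bdrv_unref(prev);   /* prev may go away now; we are done with it */
    }
    return bs;
}

Usage would be to pass NULL on the first call; a caller that leaves the loop
early still owns a reference to the current bs and has to bdrv_unref() it:

    BlockDriverState *bs = NULL;
    BdrvNextIterator it;

    while ((bs = bdrv_next_keep_ref(&it, bs))) {
        /* drain or undrain bs here; the reference keeps it alive even
         * if draining modifies the graph */
    }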

Fam


