From: Fam Zheng
Subject: Re: [Qemu-stable] [Qemu-devel] [PATCH] iothread: Stop threads before main() quits
Date: Fri, 9 Sep 2016 10:57:36 +0800
User-agent: Mutt/1.7.0 (2016-08-17)

On Thu, 09/08 11:34, Paolo Bonzini wrote:
> 
> 
> On 08/09/2016 11:28, Fam Zheng wrote:
> > Right after main_loop ends, we release various resources but keep the
> > iothreads alive. The latter are not prepared for the sudden
> > disappearance of those resources.
> > 
> > Specifically, after bdrv_close_all(), virtio-scsi dataplane gets a
> > surprise at the now-empty BlockBackend:
> > 
> > (gdb) bt
> >     at /usr/src/debug/qemu-2.6.0/hw/scsi/virtio-scsi.c:543
> >     at /usr/src/debug/qemu-2.6.0/hw/scsi/virtio-scsi.c:577
> > 
> > This happens because d->conf.blk->root has been set to NULL, so
> > blk_get_aio_context() returns qemu_aio_context, whereas s->ctx still
> > points to the iothread's context:
> > 
> >     hw/scsi/virtio-scsi.c:543:
> > 
> >     if (s->dataplane_started) {
> >         assert(blk_get_aio_context(d->conf.blk) == s->ctx);
> >     }
> > 
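> > For reference, here is roughly why the fallback happens (a
> > simplified sketch of the block-backend logic; names follow the
> > 2.6-era code, error handling elided):
> > 
> >     /* block/block-backend.c (simplified sketch) */
> >     BlockDriverState *blk_bs(BlockBackend *blk)
> >     {
> >         /* bdrv_close_all() detaches the root child, so this
> >          * becomes NULL once main_loop() has returned. */
> >         return blk->root ? blk->root->bs : NULL;
> >     }
> > 
> >     AioContext *blk_get_aio_context(BlockBackend *blk)
> >     {
> >         BlockDriverState *bs = blk_bs(blk);
> > 
> >         /* With no root BDS we fall back to the main loop's
> >          * context, which no longer matches s->ctx. */
> >         return bs ? bdrv_get_aio_context(bs) : qemu_get_aio_context();
> >     }
> > 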
> > To fix this, let's stop iothreads before doing bdrv_close_all().
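> > 
> > (For context, the iothread's loop is roughly the following,
> > simplified from iothread.c. A blocking aio_poll() has to be kicked
> > with aio_notify() before it re-checks the stopping flag, which is
> > why iothread_stop() below does both before joining the thread:)
> > 
> >     static void *iothread_run(void *opaque)
> >     {
> >         IOThread *iothread = opaque;
> > 
> >         /* init-done handshake with the creating thread elided */
> >         while (!atomic_read(&iothread->stopping)) {
> >             aio_poll(iothread->ctx, true);
> >         }
> >         return NULL;
> >     }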
> > 
> > Cc: address@hidden
> > Signed-off-by: Fam Zheng <address@hidden>
> > ---
> >  include/sysemu/iothread.h |  1 +
> >  iothread.c                | 24 ++++++++++++++++++++----
> >  vl.c                      |  2 ++
> >  3 files changed, 23 insertions(+), 4 deletions(-)
> > 
> > diff --git a/include/sysemu/iothread.h b/include/sysemu/iothread.h
> > index 2eefea1..68ac2de 100644
> > --- a/include/sysemu/iothread.h
> > +++ b/include/sysemu/iothread.h
> > @@ -35,5 +35,6 @@ typedef struct {
> >  
> >  char *iothread_get_id(IOThread *iothread);
> >  AioContext *iothread_get_aio_context(IOThread *iothread);
> > +void iothread_stop_all(void);
> >  
> >  #endif /* IOTHREAD_H */
> > diff --git a/iothread.c b/iothread.c
> > index f183d38..fb08a60 100644
> > --- a/iothread.c
> > +++ b/iothread.c
> > @@ -54,16 +54,25 @@ static void *iothread_run(void *opaque)
> >      return NULL;
> >  }
> >  
> > -static void iothread_instance_finalize(Object *obj)
> > +static int iothread_stop(Object *object, void *opaque)
> >  {
> > -    IOThread *iothread = IOTHREAD(obj);
> > +    IOThread *iothread;
> >  
> > -    if (!iothread->ctx) {
> > -        return;
> > +    iothread = (IOThread *)object_dynamic_cast(object, TYPE_IOTHREAD);
> > +    if (!iothread || !iothread->ctx) {
> > +        return 0;
> >      }
> >      iothread->stopping = true;
> >      aio_notify(iothread->ctx);
> >      qemu_thread_join(&iothread->thread);
> > +    return 0;
> > +}
> > +
> > +static void iothread_instance_finalize(Object *obj)
> > +{
> > +    IOThread *iothread = IOTHREAD(obj);
> > +
> > +    iothread_stop(obj, NULL);
> >      qemu_cond_destroy(&iothread->init_done_cond);
> >      qemu_mutex_destroy(&iothread->init_done_lock);
> >      aio_context_unref(iothread->ctx);
> > @@ -174,3 +183,10 @@ IOThreadInfoList *qmp_query_iothreads(Error **errp)
> >      object_child_foreach(container, query_one_iothread, &prev);
> >      return head;
> >  }
> > +
> > +void iothread_stop_all(void)
> > +{
> > +    Object *container = object_get_objects_root();
> > +
> > +    object_child_foreach(container, iothread_stop, NULL);
> > +}
> > diff --git a/vl.c b/vl.c
> > index ee557a1..6a218ce 100644
> > --- a/vl.c
> > +++ b/vl.c
> > @@ -121,6 +121,7 @@ int main(int argc, char **argv)
> >  #include "crypto/init.h"
> >  #include "sysemu/replay.h"
> >  #include "qapi/qmp/qerror.h"
> > +#include "sysemu/iothread.h"
> >  
> >  #define MAX_VIRTIO_CONSOLES 1
> >  #define MAX_SCLP_CONSOLES 1
> > @@ -4616,6 +4617,7 @@ int main(int argc, char **argv, char **envp)
> >      trace_init_vcpu_events();
> >      main_loop();
> >      replay_disable_events();
> > +    iothread_stop_all();
> >  
> >      bdrv_close_all();
> >      pause_all_vcpus();
> > 
> 
> Reviewed-by: Paolo Bonzini <address@hidden>

Thanks! Is there an upcoming pull request from you that this patch could sneak into? :)

Fam


