
From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH] qdev: free qemu-opts when the QOM path goes away
Date: Thu, 5 Nov 2015 13:21:21 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.3.0


On 05/11/2015 13:06, Andreas Färber wrote:
> > 1. Wouldn't it be cleaner to delete dev-opts *before* sending
> >    DEVICE_DELETED?  Like this:
> > 
> >     +++ b/hw/core/qdev.c
> >     @@ -1244,6 +1244,9 @@ static void device_unparent(Object *obj)
> >              dev->parent_bus = NULL;
> >          }
> > 
> >     +    qemu_opts_del(dev->opts);
> >     +    dev->opts = NULL;
> >     +
> >          /* Only send event if the device had been completely realized */
> >          if (dev->pending_deleted_event) {
> >              gchar *path = object_get_canonical_path(OBJECT(dev));
> 
> To me this proposal sounds sane, but I did not get to tracing the code
> flow here. Paolo, which approach do you prefer and why?

It doesn't really matter: the BQL is held throughout device_unparent(), so no
monitor command can observe whether the opts are freed before or after
DEVICE_DELETED is sent.

On the other hand, if the opts are deleted in finalize, the ID stays in use
for an arbitrarily long time after DEVICE_DELETED, because finalize typically
runs only after an RCU grace period (synchronize_rcu).
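
To illustrate the difference, here is a rough sketch of what the finalize
placement would look like, loosely modeled on device_finalize() in
hw/core/qdev.c; the existing cleanup is elided and the body below is only an
approximation for discussion, not the committed code:

    #include "qemu/osdep.h"     /* headers it would need in-tree */
    #include "hw/qdev-core.h"
    #include "qemu/option.h"

    static void device_finalize(Object *obj)
    {
        DeviceState *dev = DEVICE(obj);

        /* ... existing cleanup elided ... */

        /* Freeing the opts here only happens on the final object_unref(),
         * which is typically deferred past an RCU grace period, so the ID
         * would stay taken for an arbitrary time after DEVICE_DELETED.
         */
        qemu_opts_del(dev->opts);
        dev->opts = NULL;
    }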

> > 2. If the device is a block device, then unplugging it also deletes its
> >    backend (ugly wart we keep for backward compatibility; *not* for
> >    blockdev-add, though).  This backend also has a QemuOpts.  It gets
> >    deleted in drive_info_del().  Just like device_finalize(), it runs
> >    within object_unref(), i.e. after DEVICE_DELETED is sent.  Same race,
> >    different ID, or am I missing something?
> > 
> >    See also https://bugzilla.redhat.com/show_bug.cgi?id=1256044
>
> If we can leave this patch decoupled from block layer and decide soonish
> on the desired approach, I'd be happy to include it in my upcoming
> qom-devices pull.

I agree with you: the block layer bug is separate.
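
For reference, the block-side cleanup Markus points at looks roughly like the
following (drive_info_del() in blockdev.c, reproduced from memory, so the
exact body may differ).  As noted above, it is reached via object_unref() of
the device, i.e. only after DEVICE_DELETED has gone out, which is the same
window, just for the drive ID:

    void drive_info_del(DriveInfo *dinfo)
    {
        if (!dinfo) {
            return;
        }
        /* Frees the QemuOpts that holds the drive's ID; until this runs,
         * the ID cannot be reused, and it runs after DEVICE_DELETED.
         */
        qemu_opts_del(dinfo->opts);
        g_free(dinfo);
    }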

Paolo


