From: Alex Williamson
Subject: Re: [Qemu-devel] [PATCH] vfio: free dynamically-allocated data in instance_finalize
Date: Tue, 03 Feb 2015 10:35:07 -0700

On Tue, 2015-02-03 at 10:26 -0700, Alex Williamson wrote:
> On Tue, 2015-02-03 at 17:25 +0100, Paolo Bonzini wrote:
> > 
> > On 03/02/2015 16:20, Alex Williamson wrote:
> > > On Tue, 2015-02-03 at 13:48 +0100, Paolo Bonzini wrote:
> > >> In order to enable out-of-BQL address space lookup, destruction of
> > >> devices needs to be split in two phases.
> > >>
> > >> Unrealize is the first phase; once it completes no new accesses will
> > >> be started, but there may still be pending memory accesses that can
> > >> still be completed.
> > >>
> > >> The second part is freeing the device, which only happens once all memory
> > >> accesses are complete.  At this point the reference count has dropped to
> > >> zero and an RCU grace period must have completed (because the RCU-protected
> > >> FlatViews hold a reference to the device via memory_region_ref).  This is
> > >> when instance_finalize is called.
> > >>
> > >> Freeing data belongs in an instance_finalize callback, because the
> > >> dynamically allocated memory can still be used after unrealize by the
> > >> pending memory accesses.
> > >>
> > >> In the case of VFIO, the unrealize callback is too early to munmap the
> > >> BARs.  The munmap must be delayed until memory accesses are complete.
> > >> To do this, split vfio_unmap_bars in two.  The removal step, now called
> > >> vfio_unregister_bars, remains in vfio_exitfn.  The reclamation step
> > >> is vfio_unmap_bars and is moved to the instance_finalize callback.
> > >>
> > >> Similarly, quirk MemoryRegions have to be removed during
> > >> vfio_unregister_bars, but freeing the data structure must be delayed
> > >> to vfio_unmap_bars.
> > >>
> > >> Cc: Alex Williamson <address@hidden>
> > >> Signed-off-by: Paolo Bonzini <address@hidden>
> > >> ---
> > >>  This patch is part of the third installment of the RCU work.
> > >>  Sending it out separately for Alex to review.
> > >>
> > >>  hw/vfio/pci.c | 78 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------
> > >>  1 file changed, 68 insertions(+), 10 deletions(-)
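
As a rough, standalone sketch of the two-phase teardown described above (not
the actual patch -- the ToyBar struct and toy_* names are invented here for
illustration), the point is that the unregister step only makes the BAR
invisible to new accesses, while the munmap is deferred until finalize:

/* Toy model of the two-phase teardown; not QEMU code. */
#include <stdbool.h>
#include <stddef.h>
#include <sys/mman.h>

typedef struct ToyBar {
    void *mmap_base;     /* host mapping of the BAR, NULL if unmapped */
    size_t mmap_size;
    bool registered;     /* still visible to the guest address space? */
} ToyBar;

/* Phase 1 -- analogue of vfio_unregister_bars(), run from unrealize
 * (vfio_exitfn): stop new accesses, but keep the mapping because
 * in-flight accesses through an RCU-protected FlatView may still use it. */
static void toy_unregister_bar(ToyBar *bar)
{
    bar->registered = false;
}

/* Phase 2 -- analogue of vfio_unmap_bars(), run from instance_finalize
 * once the refcount is zero and an RCU grace period has passed; only
 * now is it safe to munmap the BAR and free the quirk data. */
static void toy_unmap_bar(ToyBar *bar)
{
    if (bar->mmap_base) {
        munmap(bar->mmap_base, bar->mmap_size);
        bar->mmap_base = NULL;
    }
}
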
> > > 
> > > Looks good to me.  I don't see any external dependencies, so do you want
> > > me to pull this in through my branch?  Thanks,
> > 
> > Yes, please.
> 
> Hmm, except qemu segfaults in whatever sanity test/capabilities probing
> happens when the VM is first opened.  I haven't figured out how to
> capture that instance in gdb yet.

Ah, simply running -device vfio-pci,? causes it:

#0  0x000055555567cb30 in vfio_put_base_device (vbasedev=0x55555636e320)
    at /net/gimli/home/alwillia/Work/qemu.git/hw/vfio/common.c:911
#1  0x00005555556847a2 in vfio_put_device (vdev=0x55555636dab0)
    at /net/gimli/home/alwillia/Work/qemu.git/hw/vfio/pci.c:3120
#2  0x00005555556853aa in vfio_instance_finalize (obj=0x55555636dab0)
    at /net/gimli/home/alwillia/Work/qemu.git/hw/vfio/pci.c:3394
#3  0x00005555558c628b in object_deinit (obj=0x55555636dab0, 
    type=0x555556322aa0) at qom/object.c:399
#4  0x00005555558c62fc in object_finalize (data=0x55555636dab0)
    at qom/object.c:413
#5  0x00005555558c6be9 in object_unref (obj=0x55555636dab0) at qom/object.c:720
#6  0x000055555574b200 in qmp_device_list_properties (
    typename=0x55555635f1b0 "vfio-pci", errp=0x7fffffffdde8) at qmp.c:555
#7  0x000055555571ead4 in qdev_device_help (opts=0x55555635f100)
    at qdev-monitor.c:243
#8  0x0000555555730a1c in device_help_func (opts=0x55555635f100, opaque=0x0)
    at vl.c:2120
#9  0x00005555559c14d0 in qemu_opts_foreach (
    list=0x555555dfd3c0 <qemu_device_opts>, 
    func=0x555555730a00 <device_help_func>, opaque=0x0, abort_on_failure=0)
    at util/qemu-option.c:1057
#10 0x000055555573564e in main (argc=13, argv=0x7fffffffe2b8, 
    envp=0x7fffffffe328) at vl.c:4010




