From: Alex Williamson
Subject: Re: [Qemu-devel] [PATCH qemu v16 17/19] spapr_iommu, vfio, memory: Notify IOMMU about starting/stopping being used by VFIO
Date: Mon, 16 May 2016 14:13:33 -0600

On Mon, 16 May 2016 18:35:14 +1000
Alexey Kardashevskiy <address@hidden> wrote:

> On 05/14/2016 08:26 AM, Alex Williamson wrote:
> > On Wed,  4 May 2016 16:52:29 +1000
> > Alexey Kardashevskiy <address@hidden> wrote:
> >  
> >> The sPAPR TCE tables manage two copies when VFIO is using an IOMMU:
> >> a guest view of the table and a hardware TCE table. If there is no VFIO
> >> presence in the address space, only the guest view is used, and in that
> >> case it is allocated in KVM. However, since there is no support yet for
> >> VFIO in KVM TCE hypercalls, when we start using VFIO we need to move
> >> the guest view from KVM to userspace; and we need to do this for every
> >> IOMMU on a bus with VFIO devices.
> >>
> >> This adds vfio_start/vfio_stop callbacks to MemoryRegionIOMMUOps to
> >> notify the IOMMU about the changing environment so it can reallocate
> >> the table to/from KVM or (when available) hook the IOMMU groups to the
> >> logical bus (LIOBN) in KVM.
> >>
> >> This removes the explicit spapr_tce_set_need_vfio() call from the PCI
> >> hotplug path as the new callbacks do this better - they notify the
> >> IOMMU at the exact moment the configuration changes, and this also
> >> covers the case of PCI hot unplug.
> >>
> >> This postpones vfio_stop() till the end of region_del() as
> >> vfio_dma_unmap() has to execute before VFIO support is disabled.
> >>
> >> As there can be multiple containers attached to the same PHB/LIOBN,
> >> this adds a wrapper with a use counter for every IOMMU MR and
> >> stores them in a list in the VFIOAddressSpace.
> >>
> >> Signed-off-by: Alexey Kardashevskiy <address@hidden>
> >> ---
> >> Changes:
> >> v16:
> >> * added a use counter in VFIOAddressSpace->VFIOIOMMUMR
> >>
> >> v15:
> >> * s/need_vfio/vfio-Users/g
> >> ---
> >>  hw/ppc/spapr_iommu.c          | 12 ++++++++++++
> >>  hw/ppc/spapr_pci.c            |  6 ------
> >>  hw/vfio/common.c              | 45 ++++++++++++++++++++++++++++++++++++++++++-
> >>  include/exec/memory.h         |  4 ++++
> >>  include/hw/vfio/vfio-common.h |  7 +++++++
> >>  5 files changed, 67 insertions(+), 7 deletions(-)
> >>
> >> diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
> >> index c945dba..7af2700 100644
> >> --- a/hw/ppc/spapr_iommu.c
> >> +++ b/hw/ppc/spapr_iommu.c
> >> @@ -155,6 +155,16 @@ static uint64_t spapr_tce_get_page_sizes(MemoryRegion *iommu)
> >>      return 1ULL << tcet->page_shift;
> >>  }
> >>
> >> +static void spapr_tce_vfio_start(MemoryRegion *iommu)
> >> +{
> >> +    spapr_tce_set_need_vfio(container_of(iommu, sPAPRTCETable, iommu), true);
> >> +}
> >> +
> >> +static void spapr_tce_vfio_stop(MemoryRegion *iommu)
> >> +{
> >> +    spapr_tce_set_need_vfio(container_of(iommu, sPAPRTCETable, iommu), false);
> >> +}
> >> +
> >>  static void spapr_tce_table_do_enable(sPAPRTCETable *tcet);
> >>  static void spapr_tce_table_do_disable(sPAPRTCETable *tcet);
> >>
> >> @@ -239,6 +249,8 @@ static const VMStateDescription vmstate_spapr_tce_table = {
> >>  static MemoryRegionIOMMUOps spapr_iommu_ops = {
> >>      .translate = spapr_tce_translate_iommu,
> >>      .get_page_sizes = spapr_tce_get_page_sizes,
> >> +    .vfio_start = spapr_tce_vfio_start,
> >> +    .vfio_stop = spapr_tce_vfio_stop,
> >>  };
> >>
> >>  static int spapr_tce_table_realize(DeviceState *dev)
> >> diff --git a/hw/ppc/spapr_pci.c b/hw/ppc/spapr_pci.c
> >> index 5b9ccff..51e7d56 100644
> >> --- a/hw/ppc/spapr_pci.c
> >> +++ b/hw/ppc/spapr_pci.c
> >> @@ -1086,12 +1086,6 @@ static void spapr_phb_add_pci_device(sPAPRDRConnector *drc,
> >>      void *fdt = NULL;
> >>      int fdt_start_offset = 0, fdt_size;
> >>
> >> -    if (object_dynamic_cast(OBJECT(pdev), "vfio-pci")) {
> >> -        sPAPRTCETable *tcet = spapr_tce_find_by_liobn(phb->dma_liobn);
> >> -
> >> -        spapr_tce_set_need_vfio(tcet, true);
> >> -    }
> >> -
> >>      if (dev->hotplugged) {
> >>          fdt = create_device_tree(&fdt_size);
> >>          fdt_start_offset = spapr_create_pci_child_dt(phb, pdev, fdt, 0);
> >> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> >> index 3f2fb23..03daf88 100644
> >> --- a/hw/vfio/common.c
> >> +++ b/hw/vfio/common.c
> >> @@ -421,6 +421,26 @@ static void vfio_listener_region_add(MemoryListener *listener,
> >>          QLIST_INSERT_HEAD(&container->giommu_list, giommu, giommu_next);
> >>
> >>          memory_region_register_iommu_notifier(giommu->iommu, &giommu->n);
> >> +
> >> +        if (section->mr->iommu_ops && section->mr->iommu_ops->vfio_start) {
> >> +            VFIOIOMMUMR *iommumr;
> >> +            bool found = false;
> >> +
> >> +            QLIST_FOREACH(iommumr, &container->space->iommumrs, iommumr_next) {
> >> +                if (iommumr->iommu == section->mr) {
> >> +                    found = true;
> >> +                    break;
> >> +                }
> >> +            }
> >> +            if (!found) {
> >> +                iommumr = g_malloc0(sizeof(*iommumr));
> >> +                iommumr->iommu = section->mr;
> >> +                section->mr->iommu_ops->vfio_start(section->mr);
> >> +                QLIST_INSERT_HEAD(&container->space->iommumrs, iommumr,
> >> +                                  iommumr_next);
> >> +            }
> >> +            ++iommumr->users;
> >> +        }
> >>          memory_region_iommu_replay(giommu->iommu, &giommu->n,
> >>                                     false);
> >>
> >> @@ -470,6 +490,7 @@ static void vfio_listener_region_del(MemoryListener *listener,
> >>      hwaddr iova, end;
> >>      Int128 llend, llsize;
> >>      int ret;
> >> +    MemoryRegion *iommu = NULL;
> >>
> >>      if (vfio_listener_skipped_section(section)) {
> >>          trace_vfio_listener_region_del_skip(
> >> @@ -490,13 +511,30 @@ static void vfio_listener_region_del(MemoryListener *listener,
> >>
> >>          QLIST_FOREACH(giommu, &container->giommu_list, giommu_next) {
> >>              if (giommu->iommu == section->mr) {
> >> +                VFIOIOMMUMR *iommumr;
> >> +
> >>                  memory_region_unregister_iommu_notifier(&giommu->n);
> >> +
> >> +                QLIST_FOREACH(iommumr, &container->space->iommumrs,
> >> +                              iommumr_next) {
> >> +                    if (iommumr->iommu != section->mr) {
> >> +                        continue;
> >> +                    }
> >> +                    --iommumr->users;
> >> +                    if (iommumr->users) {
> >> +                        break;
> >> +                    }
> >> +                    QLIST_REMOVE(iommumr, iommumr_next);
> >> +                    g_free(iommumr);
> >> +                    iommu = giommu->iommu;
> >> +                    break;
> >> +                }
> >> +
> >>                  QLIST_REMOVE(giommu, giommu_next);
> >>                  g_free(giommu);
> >>                  break;
> >>              }
> >>          }
> >> -
> >>          /*
> >>           * FIXME: We assume the one big unmap below is adequate to
> >>           * remove any individual page mappings in the IOMMU which
> >> @@ -527,6 +565,10 @@ static void vfio_listener_region_del(MemoryListener *listener,
> >>                       "0x%"HWADDR_PRIx") = %d (%m)",
> >>                       container, iova, int128_get64(llsize), ret);
> >>      }
> >> +
> >> +    if (iommu && iommu->iommu_ops && iommu->iommu_ops->vfio_stop) {
> >> +        iommu->iommu_ops->vfio_stop(section->mr);
> >> +    }
> >>  }
> >>
> >>  static const MemoryListener vfio_memory_listener = {
> >> @@ -787,6 +829,7 @@ static VFIOAddressSpace *vfio_get_address_space(AddressSpace *as)
> >>      space = g_malloc0(sizeof(*space));
> >>      space->as = as;
> >>      QLIST_INIT(&space->containers);
> >> +    QLIST_INIT(&space->iommumrs);
> >>
> >>      QLIST_INSERT_HEAD(&vfio_address_spaces, space, list);
> >>
> >> diff --git a/include/exec/memory.h b/include/exec/memory.h
> >> index a3a1703..52d2c70 100644
> >> --- a/include/exec/memory.h
> >> +++ b/include/exec/memory.h
> >> @@ -151,6 +151,10 @@ struct MemoryRegionIOMMUOps {
> >>      IOMMUTLBEntry (*translate)(MemoryRegion *iommu, hwaddr addr, bool is_write);
> >>      /* Returns supported page sizes */
> >>      uint64_t (*get_page_sizes)(MemoryRegion *iommu);
> >> +    /* Called when VFIO starts using this */
> >> +    void (*vfio_start)(MemoryRegion *iommu);
> >> +    /* Called when VFIO stops using this */
> >> +    void (*vfio_stop)(MemoryRegion *iommu);  
> >
> > Really?  Just no.  Generic MemoryRegionIOMMUOps should have no
> > visibility of vfio and certainly not vfio specific callbacks.  I don't
> > really understand what guest view versus KVM view is doing here, but
> > it's clearly something to do with visibility versus acceleration of the
> > IOMMU tables and the callbacks, if they're even generic at all, should
> > be reflecting that, not some vague relation to vfio.  
> 
> 
> Is it 'no' to just having these specific callbacks in the
> MemoryRegionIOMMUOps struct, or 'no' to the whole idea of notifying the
> MR that VFIO has started using it?
> 
> If the former, I could inherit a QOM object from the IOMMU MR, add a QOM
> interface and call it from VFIO. Manually counting vfio-pci devices on a
> hotplug-enabled PHB is painful :-/

So is this interface.  The idea of putting vfio specific callbacks in a
common structure is just a non-starter.  I think you're trying to
manage the visibility of the iommu to QEMU, so wouldn't it be logical
to tie this into the iommu notifier?

You already have memory_region_register_iommu_notifier() and 
memory_region_unregister_iommu_notifier() as generic entry points where
a caller registering an iommu notifier clearly cares about a QEMU-based
iommu table and de-registration clearly indicates an end of that use.
Could you not make MemoryRegionIOMMUOps callbacks around those events,
rather than tainting it with vfio specific callbacks?  If you don't want
to keep track of usage count yourself, perhaps the callbacks would only
be used when the first entry is added to the notifier list QLIST and
when the last entry is removed.  This would allow vfio/common to know
nothing about this and we wouldn't need to invoke a poorly defined ops
callback interface or duplicate reference counts on platforms that
don't have this issue.


> >>  };
> >>
> >>  typedef struct CoalescedMemoryRange CoalescedMemoryRange;
> >> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
> >> index 808263b..a9e6e33 100644
> >> --- a/include/hw/vfio/vfio-common.h
> >> +++ b/include/hw/vfio/vfio-common.h
> >> @@ -64,9 +64,16 @@ typedef struct VFIORegion {
> >>  typedef struct VFIOAddressSpace {
> >>      AddressSpace *as;
> >>      QLIST_HEAD(, VFIOContainer) containers;
> >> +    QLIST_HEAD(, VFIOIOMMUMR) iommumrs;
> >>      QLIST_ENTRY(VFIOAddressSpace) list;
> >>  } VFIOAddressSpace;
> >>
> >> +typedef struct VFIOIOMMUMR {
> >> +    MemoryRegion *iommu;
> >> +    int users;
> >> +    QLIST_ENTRY(VFIOIOMMUMR) iommumr_next;
> >> +} VFIOIOMMUMR;
> >> +
> >>  struct VFIOGroup;
> >>
> >>  typedef struct VFIOContainer {  
> >  