From: David Gibson
Subject: Re: [Qemu-devel] [Qemu-ppc] [PATCH 11/11] vfio: Add guest side IOMMU support
Date: Wed, 15 May 2013 13:32:50 +1000
User-agent: Mutt/1.5.21 (2010-09-15)

On Tue, May 14, 2013 at 08:51:08PM -0600, Alex Williamson wrote:
> On Wed, 2013-05-15 at 11:33 +1000, David Gibson wrote:
> > On Tue, May 14, 2013 at 11:15:26AM -0600, Alex Williamson wrote:
> > > On Tue, 2013-05-14 at 19:13 +1000, David Gibson wrote:
> > > > This patch uses the new IOMMU notifiers to allow VFIO pass through devices
> > > > to work with guest side IOMMUs, as long as the host-side VFIO iommu has
> > > > sufficient capability and granularity to match the guest side. This works
> > > > by tracking all map and unmap operations on the guest IOMMU using the
> > > > notifiers, and mirroring them into VFIO.
> > > > 
> > > > There are a number of FIXMEs, and the scheme involves rather more notifier
> > > > structures than I'd like, but it should make for a reasonable proof of
> > > > concept.
> > > > 
> > > > Signed-off-by: David Gibson <address@hidden>
> > > > ---
> > > >  hw/misc/vfio.c |  139 ++++++++++++++++++++++++++++++++++++++++++++++++++------
> > > >  1 file changed, 126 insertions(+), 13 deletions(-)
> > > > 
> > > > diff --git a/hw/misc/vfio.c b/hw/misc/vfio.c
> > > > index f4e3792..62a83ca 100644
> > > > --- a/hw/misc/vfio.c
> > > > +++ b/hw/misc/vfio.c
> > > > @@ -133,10 +133,18 @@ typedef struct VFIOContainer {
> > > >          };
> > > >          void (*release)(struct VFIOContainer *);
> > > >      } iommu_data;
> > > > +    QLIST_HEAD(, VFIOGuestIOMMU) guest_iommus;
> > > 
> > > Seems like this would be related to the space, not the container.
> > 
> > So, originally I was going to put it into the space, until I realised
> > that the MemoryListener which sets it up is already per-container.
> > The list still could be per-space, of course, but we'd have to do a
> > bunch of check-if-it's-already-there stuff.  And the remove path is
> > worse.
> > 
> > > >      QLIST_HEAD(, VFIOGroup) group_list;
> > > >      QLIST_ENTRY(VFIOContainer) next;
> > > >  } VFIOContainer;
> > > >  
> > > > +typedef struct VFIOGuestIOMMU {
> > > > +    VFIOContainer *container;
> > > > +    MemoryRegion *iommu;
> > > > +    Notifier n;
> > > > +    QLIST_ENTRY(VFIOGuestIOMMU) list;
> > > > +} VFIOGuestIOMMU;
> > > > +
> > > >  /* Cache of MSI-X setup plus extra mmap and memory region for split BAR map */
> > > >  typedef struct VFIOMSIXInfo {
> > > >      uint8_t table_bar;
> > > > @@ -1940,7 +1948,64 @@ static int vfio_dma_map(VFIOContainer *container, hwaddr iova,
> > > >  
> > > >  static bool vfio_listener_skipped_section(MemoryRegionSection *section)
> > > >  {
> > > > -    return !memory_region_is_ram(section->mr);
> > > > +    return !memory_region_is_ram(section->mr) &&
> > > > +        !memory_region_is_iommu(section->mr);
> > > > +}
> > > > +
> > > > +static void vfio_iommu_map_notify(Notifier *n, void *data)
> > > > +{
> > > > +    VFIOGuestIOMMU *giommu = container_of(n, VFIOGuestIOMMU, n);
> > > > +    MemoryRegion *iommu = giommu->iommu;
> > > > +    VFIOContainer *container = giommu->container;
> > > > +    IOMMUTLBEntry *iotlb = data;
> > > > +    MemoryRegionSection *mrs;
> > > > +    hwaddr xlat;
> > > > +    hwaddr len = iotlb->addr_mask + 1;
> > > > +    void *vaddr;
> > > > +    int ret;
> > > > +
> > > > +    DPRINTF("iommu map @ %"HWADDR_PRIx" - %"HWADDR_PRIx"\n",
> > > > +            iotlb->iova, iotlb->iova + iotlb->addr_mask);
> > > > +
> > > > +    /* The IOMMU TLB entry we have just covers translation through
> > > > +     * this IOMMU to its immediate target.  We need to translate
> > > > +     * it the rest of the way through to memory. */
> > > > +    mrs = address_space_translate(iommu->iommu_target_as,
> > > > +                                  iotlb->translated_addr,
> > > > +                                  &xlat, &len, iotlb->perm[1]);
> > > > +    if (!memory_region_is_ram(mrs->mr)) {
> > > > +        DPRINTF("iommu map to non memory area %"HWADDR_PRIx"\n",
> > > > +                xlat);
> > > > +        return;
> > > > +    }
> > > > +    if (len & iotlb->addr_mask) {
> > > > +        DPRINTF("iommu has granularity incompatible with target AS\n");
> > > > +        return;
> > > > +    }
> > > > +
> > > > +    vaddr = memory_region_get_ram_ptr(mrs->mr) +
> > > > +        mrs->offset_within_region +
> > > > +        (xlat - mrs->offset_within_address_space);
> > > > +
> > > > +    if (iotlb->perm[0] || iotlb->perm[1]) {
> > > > +        ret = vfio_dma_map(container, iotlb->iova,
> > > > +                           iotlb->addr_mask + 1, vaddr,
> > > > +                           !iotlb->perm[1] || mrs->readonly);
> > > > +        if (ret) {
> > > > +            error_report("vfio_dma_map(%p, 0x%"HWADDR_PRIx", "
> > > > +                         "0x%"HWADDR_PRIx", %p) = %d (%m)",
> > > > +                         container, iotlb->iova,
> > > > +                         iotlb->addr_mask + 1, vaddr, ret);
> > > > +        }
> > > > +    } else {
> > > > +        ret = vfio_dma_unmap(container, iotlb->iova, iotlb->addr_mask + 1);
> > > > +        if (ret) {
> > > > +            error_report("vfio_dma_unmap(%p, 0x%"HWADDR_PRIx", "
> > > > +                         "0x%"HWADDR_PRIx") = %d (%m)",
> > > > +                         container, iotlb->iova,
> > > > +                         iotlb->addr_mask + 1, ret);
> > > > +        }
> > > > +    }
> > > >  }
> > > >  
> > > >  static void vfio_listener_region_add(MemoryListener *listener,
> > > > @@ -1949,11 +2014,8 @@ static void vfio_listener_region_add(MemoryListener *listener,
> > > >      VFIOContainer *container = container_of(listener, VFIOContainer,
> > > >                                              iommu_data.listener);
> > > >      hwaddr iova, end;
> > > > -    void *vaddr;
> > > >      int ret;
> > > >  
> > > > -    assert(!memory_region_is_iommu(section->mr));
> > > > -
> > > >      if (vfio_listener_skipped_section(section)) {
> > > >          DPRINTF("SKIPPING region_add %"HWADDR_PRIx" - %"PRIx64"\n",
> > > >                  section->offset_within_address_space,
> > > > @@ -1975,20 +2037,55 @@ static void vfio_listener_region_add(MemoryListener *listener,
> > > >          return;
> > > >      }
> > > >  
> > > > -    vaddr = memory_region_get_ram_ptr(section->mr) +
> > > > +    memory_region_ref(section->mr);
> > > > +
> > > > +    if (memory_region_is_ram(section->mr)) {
> > > > +        void *vaddr;
> > > > +
> > > > +        DPRINTF("region_add [ram] %"HWADDR_PRIx" - %"HWADDR_PRIx" 
> > > > [%p]\n",
> > > > +                iova, end - 1, vaddr);
> > > > +
> > > > +        vaddr = memory_region_get_ram_ptr(section->mr) +
> > > >              section->offset_within_region +
> > > >              (iova - section->offset_within_address_space);
> > > >  
> > > > -    DPRINTF("region_add %"HWADDR_PRIx" - %"HWADDR_PRIx" [%p]\n",
> > > > -            iova, end - 1, vaddr);
> > > >  
> > > > -    memory_region_ref(section->mr);
> > > > -    ret = vfio_dma_map(container, iova, end - iova, vaddr, section->readonly);
> > > > -    if (ret) {
> > > > -        error_report("vfio_dma_map(%p, 0x%"HWADDR_PRIx", "
> > > > -                     "0x%"HWADDR_PRIx", %p) = %d (%m)",
> > > > -                     container, iova, end - iova, vaddr, ret);
> > > > +        ret = vfio_dma_map(container, iova, end - iova, vaddr,
> > > > +                           section->readonly);
> > > > +        if (ret) {
> > > > +            error_report("vfio_dma_map(%p, 0x%"HWADDR_PRIx", "
> > > > +                         "0x%"HWADDR_PRIx", %p) = %d (%m)",
> > > > +                         container, iova, end - iova, vaddr, ret);
> > > > +        }
> > > > +    } else if (memory_region_is_iommu(section->mr)) {
> > > > +        VFIOGuestIOMMU *giommu;
> > > > +
> > > > +        DPRINTF("region_add [iommu] %"HWADDR_PRIx" - %"HWADDR_PRIx"\n",
> > > > +                iova, end - 1);
> > > > +
> > > > +        /*
> > > > +         * FIXME: We should do some checking to see if the
> > > > +         * capabilities of the host VFIO IOMMU are adequate to model
> > > > +         * the guest IOMMU
> > > > +         *
> > > > +         * FIXME: This assumes that the guest IOMMU is empty of
> > > > +         * mappings at this point - we should either enforce this, or
> > > > +         * loop through existing mappings to map them into VFIO.
> > > > +         *
> > > > +         * FIXME: For VFIO iommu types which have KVM acceleration to
> > > > +         * avoid bouncing all map/unmaps through qemu this way, this
> > > > +         * would be the right place to wire that up (tell the KVM
> > > > +         * device emulation the VFIO iommu handles to use).
> > > > +         */
> > > > +        giommu = g_malloc0(sizeof(*giommu));
> > > > +        giommu->iommu = section->mr;
> > > > +        giommu->container = container;
> > > > +        giommu->n.notify = vfio_iommu_map_notify;
> > > > +
> > > > +        QLIST_INSERT_HEAD(&container->guest_iommus, giommu, list);
> > > > +        memory_region_register_iommu_notifier(giommu->iommu, &giommu->n);
> > > 
> > > And this is also filtered on the space, so we're not adding iommus that
> > > aren't handling regions within this space, right?
> > 
> > I'm not sure what you're getting at.
> 
> Trying to make sure I understand why guest_iommus is a list.  We can
> have multiple guest iommus, but in the majority of those cases I would
> expect only one iommu per VFIOAddressSpace.  So if the listener is for
> this space, we only add the relevant iommu and not all of the iommus in
> the machine.

That's what we do - we only add notifiers for iommu regions that
appear within the relevant address space.  It's a list because at
least theoretically there could be more than one iommu region in the
AS, although I don't know of any real cases where that would be true.


>  Actually maybe two guest iommu ranges are common for spapr
> per space, a 32bit and a 64bit range?

I think 64-bit DMA windows on PAPR are usually just mapped to RAM with
a fixed offset, rather than having TCEs (page table).  That might well
introduce some extra complexities in how we mirror that into VFIO, but
it's not directly relevant to this point.

> > > Does the memory listener need to move to the space?
> > 
> > That would make this simpler, but it has other problems.  Having the
> > listener per-container means that when a container is added to a space
> > which already has a bunch of things in it, we automatically get those
> > replayed by the listener so we can set up the new container.
> 
> Ah, right, we need replay.  Ok, I think it's a reasonable proof of
> concept to start with.  Thanks,

So, replay raises some interesting issues itself.  At the moment we
have replay for the regions, and that's straightforward enough.  The
question is what we do about replay of mappings with guest iommus
themselves.

In the case where the guest IOMMU is essentially purely software
implemented, replay is fairly straightforward - we add a new hook to
mr->iommu_ops which scans the existing mappings and sends them to the
given notifier.
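
Very roughly, and purely as a sketch - the "replay" hook itself, the
sPAPRTCETable field names and spapr_tce_translate() below are all
assumptions for illustration, nothing in this series - the software
case could look something like:

static void spapr_tce_replay(MemoryRegion *iommu, Notifier *n)
{
    /* Assumed: the spapr TCE table embedding this iommu MemoryRegion */
    sPAPRTCETable *tcet = container_of(iommu, sPAPRTCETable, iommu);
    hwaddr iova;

    for (iova = 0; iova < tcet->window_size; iova += tcet->page_size) {
        /* Assumed helper returning the current IOMMUTLBEntry for iova */
        IOMMUTLBEntry entry = spapr_tce_translate(tcet, iova);

        if (entry.perm[0] || entry.perm[1]) {
            /* Feed the pre-existing mapping to the new notifier exactly
             * as a fresh map operation would */
            n->notify(n, &entry);
        }
    }
}

vfio_listener_region_add() would then invoke that hook right after
registering the notifier, so a newly attached container picks up any
mappings made before it arrived.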

For spapr, though, PUT_TCE can be a real bottleneck, so at least for a
host bridge with only VFIO devices, we want to have PUT_TCE
implemented directly in KVM, with an ioctl() to wire up the PUT_TCE
liobn directly to the host IOMMU table underlying the VFIO container.
Replay is kind of a problem though, because qemu has no record of the
mappings to replay from.

In practice we can configure that problem away though - if we only put
one vfio group per guest host bridge, then the container will always
be wired up before any mappings are made.  But we need some way of
safely detecting the ok case and optimizing appropriately.
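
To make that concrete - again just a sketch, with spapr_tce_mapping_count()
and kvm_vfio_wire_tce_liobn() as made-up names standing in for whatever
interfaces we actually end up with - the check could be as simple as:

static void vfio_maybe_enable_kvm_put_tce(VFIOContainer *container,
                                          MemoryRegion *iommu)
{
    /* Assumed helper: number of TCEs currently valid in this window */
    if (spapr_tce_mapping_count(iommu) != 0) {
        /* Mappings predate the container and qemu has no record of them
         * to replay, so keep bouncing every PUT_TCE through the
         * notifier path above */
        return;
    }

    /* Safe case: the container is wired up before any mappings exist,
     * so an in-kernel PUT_TCE handler can't miss anything.  This call
     * stands in for the eventual ioctl() connecting the liobn to the
     * host IOMMU table underlying the container */
    kvm_vfio_wire_tce_liobn(container, iommu);
}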

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson

