From: Alexey Kardashevskiy
Subject: Re: [Qemu-devel] [PATCH qemu v14 17/18] vfio/spapr: Use VFIO_SPAPR_TCE_v2_IOMMU
Date: Tue, 22 Mar 2016 16:54:07 +1100
User-agent: Mozilla/5.0 (X11; Linux i686 on x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.7.0

On 03/22/2016 04:14 PM, David Gibson wrote:
On Mon, Mar 21, 2016 at 06:47:05PM +1100, Alexey Kardashevskiy wrote:
The new VFIO_SPAPR_TCE_v2_IOMMU type supports dynamic DMA window management.
This adds the ability for VFIO common code to dynamically allocate and remove
DMA windows in the host kernel when a new VFIO container is added or removed.

This adds a VFIO_IOMMU_SPAPR_TCE_CREATE ioctl call to vfio_listener_region_add
and adds the just-created IOMMU window to the host IOMMU list; the opposite
action is taken in vfio_listener_region_del.

When creating a new window, this uses a heuristic to decide on the number of
TCE table levels.
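For example (numbers for illustration only): a 2GB window of 64K IOMMU pages
needs 2GB >> 16 = 32768 TCEs, i.e. 32768 * 8 bytes = 256KB of TCE table, which
is 64 host 4K pages and therefore fits in a single level under this heuristic.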

This should cause no guest-visible change in behavior.

Signed-off-by: Alexey Kardashevskiy <address@hidden>
---
Changes:
v14:
* new to the series

---
TODO:
* export levels to PHB
---
  hw/vfio/common.c | 108 ++++++++++++++++++++++++++++++++++++++++++++++++++++---
  trace-events     |   2 ++
  2 files changed, 105 insertions(+), 5 deletions(-)

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 4e873b7..421d6eb 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -279,6 +279,14 @@ static int vfio_host_iommu_add(VFIOContainer *container,
      return 0;
  }

+static void vfio_host_iommu_del(VFIOContainer *container, hwaddr min_iova)
+{
+    VFIOHostIOMMU *hiommu = vfio_host_iommu_lookup(container, min_iova, 0x1000);

The hard-coded 0x1000 looks dubious..

Well, that's the minimal page size...
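If it has to stay hard-coded, a named constant would at least make that
explicit, e.g. (sketch only, the macro name is made up):

    /* Hypothetical named constant for the minimum IOMMU page size (4K) */
    #define VFIO_HOST_IOMMU_MIN_PAGESIZE 0x1000ULL

    VFIOHostIOMMU *hiommu = vfio_host_iommu_lookup(container, min_iova,
                                                   VFIO_HOST_IOMMU_MIN_PAGESIZE);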



+    g_assert(hiommu);
+    QLIST_REMOVE(hiommu, hiommu_next);
+}
+
  static bool vfio_listener_skipped_section(MemoryRegionSection *section)
  {
      return (!memory_region_is_ram(section->mr) &&
@@ -392,6 +400,61 @@ static void vfio_listener_region_add(MemoryListener *listener,
      }
      end = int128_get64(llend);

+    if (container->iommu_type == VFIO_SPAPR_TCE_v2_IOMMU) {

I think this would be clearer split out into a helper function,
vfio_create_host_window() or something.


It would rather be vfio_spapr_create_host_window(), and we have been avoiding xxx_spapr_xxx names so far. I would cut-and-paste the SPAPR PCI AS listener into a separate file, but that usually triggers more discussion and never ends well.
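Something like this, perhaps (a sketch only, untested; the name and error
handling are illustrative, not part of the patch):

static int vfio_spapr_create_host_window(VFIOContainer *container,
                                         MemoryRegionSection *section)
{
    unsigned entries, pages;
    struct vfio_iommu_spapr_tce_create create = { .argsz = sizeof(create) };
    int ret;

    /* Window geometry comes from the memory region being added */
    create.window_size = memory_region_size(section->mr);
    create.page_shift =
            ctz64(section->mr->iommu_ops->get_page_sizes(section->mr));

    /* Same level heuristic as in the patch below */
    entries = create.window_size >> create.page_shift;
    pages = (entries * sizeof(uint64_t)) / getpagesize();
    create.levels = ctz64(pow2ceil(pages) - 1) / 6 + 1;

    ret = ioctl(container->fd, VFIO_IOMMU_SPAPR_TCE_CREATE, &create);
    if (ret) {
        return -errno;
    }

    /* Remember the new host window so later lookups can find it */
    vfio_host_iommu_add(container, create.start_addr,
                        create.start_addr + create.window_size - 1,
                        1ULL << create.page_shift);
    return 0;
}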



+        unsigned entries, pages;
+        struct vfio_iommu_spapr_tce_create create = { .argsz = sizeof(create) };
+
+        g_assert(section->mr->iommu_ops);
+        g_assert(memory_region_is_iommu(section->mr));

I don't think you need these asserts.  AFAICT the same logic should
work if a RAM MR was added directly to PCI address space - this would
create the new host window, then the existing code for adding a RAM MR
would map that block of RAM statically into the new window.

In what configuration/machine can we do that on SPAPR?


+        trace_vfio_listener_region_add_iommu(iova, end - 1);
+        /*
+         * FIXME: For VFIO iommu types which have KVM acceleration to
+         * avoid bouncing all map/unmaps through qemu this way, this
+         * would be the right place to wire that up (tell the KVM
+         * device emulation the VFIO iommu handles to use).
+         */
+        create.window_size = memory_region_size(section->mr);
+        create.page_shift =
+                ctz64(section->mr->iommu_ops->get_page_sizes(section->mr));

Ah.. except that I guess you'd need to fall back to host page size
here to handle a RAM MR.

Can you give an example of such a RAM MR being added to the PCI AS on SPAPR?


+        /*
+         * SPAPR host supports multilevel TCE tables, there is some
+         * heuristic to decide how many levels we want for our table:
+         * 0..64 = 1; 65..4096 = 2; 4097..262144 = 3; 262145.. = 4
+         */
+        entries = create.window_size >> create.page_shift;
+        pages = (entries * sizeof(uint64_t)) / getpagesize();
+        create.levels = ctz64(pow2ceil(pages) - 1) / 6 + 1;
+
+        ret = ioctl(container->fd, VFIO_IOMMU_SPAPR_TCE_CREATE, &create);
+        if (ret) {
+            error_report("Failed to create a window, ret = %d (%m)", ret);
+            goto fail;
+        }
+
+        if (create.start_addr != section->offset_within_address_space ||
+            vfio_host_iommu_lookup(container, create.start_addr,
+                                   create.start_addr + create.window_size - 1)) {

Under what circumstances can this trigger?  Is the kernel ioctl
allowed to return a different window start address than the one
requested?

You already asked this some time ago :) Userspace cannot request an address; the host kernel picks one and returns it.
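In other words (a sketch with made-up example values; only argsz, page_shift,
window_size and levels are supplied by userspace):

    struct vfio_iommu_spapr_tce_create create = {
        .argsz = sizeof(create),
        .page_shift = 16,            /* in: 64K IOMMU pages */
        .window_size = 1ULL << 30,   /* in: 1GB window */
        .levels = 1,                 /* in: single-level TCE table */
    };

    if (ioctl(container->fd, VFIO_IOMMU_SPAPR_TCE_CREATE, &create) == 0) {
        /* out: the host kernel picked the base address for the new window */
        hwaddr start = create.start_addr;
    }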


The second check looks very strange - if it returns true, doesn't that mean
you *do* have a host window which can accommodate this guest region, which is
what you want?

This should not happen; that is what this check is for. I can make it an assert() or something like this:
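    /* Sketch of the assert() variant; not what the patch currently does */
    g_assert(create.start_addr == section->offset_within_address_space);
    g_assert(!vfio_host_iommu_lookup(container, create.start_addr,
                                     create.start_addr + create.window_size - 1));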



+            struct vfio_iommu_spapr_tce_remove remove = {
+                .argsz = sizeof(remove),
+                .start_addr = create.start_addr
+            };
+            error_report("Host doesn't support DMA window at %"HWADDR_PRIx", must be %"PRIx64,
+                         section->offset_within_address_space,
+                         create.start_addr);
+            ioctl(container->fd, VFIO_IOMMU_SPAPR_TCE_REMOVE, &remove);
+            ret = -EINVAL;
+            goto fail;
+        }
+        trace_vfio_spapr_create_window(create.page_shift,
+                                       create.window_size,
+                                       create.start_addr);
+
+        vfio_host_iommu_add(container, create.start_addr,
+                            create.start_addr + create.window_size - 1,
+                            1ULL << create.page_shift);
+    }
+
      if (!vfio_host_iommu_lookup(container, iova, end - 1)) {
          error_report("vfio: IOMMU container %p can't map guest IOVA region"
                       " 0x%"HWADDR_PRIx"..0x%"HWADDR_PRIx,
@@ -525,6 +588,22 @@ static void vfio_listener_region_del(MemoryListener *listener,
                       container, iova, end - iova, ret);
      }

+    if (container->iommu_type == VFIO_SPAPR_TCE_v2_IOMMU) {
+        struct vfio_iommu_spapr_tce_remove remove = {
+            .argsz = sizeof(remove),
+            .start_addr = section->offset_within_address_space,
+        };
+        ret = ioctl(container->fd, VFIO_IOMMU_SPAPR_TCE_REMOVE, &remove);
+        if (ret) {
+            error_report("Failed to remove window at %"PRIx64,
+                         remove.start_addr);
+        }
+
+        vfio_host_iommu_del(container, section->offset_within_address_space);
+
+        trace_vfio_spapr_remove_window(remove.start_addr);
+    }
+
      if (iommu && iommu->iommu_ops && iommu->iommu_ops->vfio_stop) {
          iommu->iommu_ops->vfio_stop(section->mr);
      }
@@ -928,11 +1007,30 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
              goto listener_release_exit;
          }

-        /* The default table uses 4K pages */
-        vfio_host_iommu_add(container, info.dma32_window_start,
-                            info.dma32_window_start +
-                            info.dma32_window_size - 1,
-                            0x1000);
+        if (v2) {
+            /*
+             * There is a default window in the just-created container.
+             * To make region_add/del simpler, we had better remove this
+             * window now and let the iommu_listener callbacks
+             * create/remove windows as needed.
+             */
+            struct vfio_iommu_spapr_tce_remove remove = {
+                .argsz = sizeof(remove),
+                .start_addr = info.dma32_window_start,
+            };
+            ret = ioctl(fd, VFIO_IOMMU_SPAPR_TCE_REMOVE, &remove);
+            if (ret) {
+                error_report("vfio: VFIO_IOMMU_SPAPR_TCE_REMOVE failed: %m");
+                ret = -errno;
+                goto free_container_exit;
+            }
+        } else {
+            /* The default table uses 4K pages */
+            vfio_host_iommu_add(container, info.dma32_window_start,
+                                info.dma32_window_start +
+                                info.dma32_window_size - 1,
+                                0x1000);
+        }
      } else {
          error_report("vfio: No available IOMMU models");
          ret = -EINVAL;
diff --git a/trace-events b/trace-events
index cc619e1..f2b75a3 100644
--- a/trace-events
+++ b/trace-events
@@ -1736,6 +1736,8 @@ vfio_region_finalize(const char *name, int index) "Device %s, region %d"
  vfio_region_mmaps_set_enabled(const char *name, bool enabled) "Region %s mmaps enabled: %d"
  vfio_ram_register(uint64_t va, uint64_t size, int ret) "va=%"PRIx64" size=%"PRIx64" ret=%d"
  vfio_ram_unregister(uint64_t va, uint64_t size, int ret) "va=%"PRIx64" size=%"PRIx64" ret=%d"
+vfio_spapr_create_window(int ps, uint64_t ws, uint64_t off) "pageshift=0x%x winsize=0x%"PRIx64" offset=0x%"PRIx64
+vfio_spapr_remove_window(uint64_t off) "offset=%"PRIx64

  # hw/vfio/platform.c
  vfio_platform_base_device_init(char *name, int groupid) "%s belongs to group #%d"



--
Alexey


