
Re: [Qemu-devel] [PATCH 2/9] spapr_iommu: Enable multiple TCE requests


From: Alexander Graf
Subject: Re: [Qemu-devel] [PATCH 2/9] spapr_iommu: Enable multiple TCE requests
Date: Wed, 21 May 2014 16:37:30 +0200
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:24.0) Gecko/20100101 Thunderbird/24.5.0


On 21.05.14 16:21, Alexey Kardashevskiy wrote:
Currently only a single TCE entry per request is supported (H_PUT_TCE).
However, the PAPR+ specification allows multiple-entry requests such as
H_PUT_TCE_INDIRECT and H_STUFF_TCE. By reducing the number of transitions
to the host kernel via ioctls, support for these calls can accelerate IOMMU
operations.

This implements H_STUFF_TCE and H_PUT_TCE_INDIRECT.

This advertises "multi-tce" capability to the guest if the host kernel
supports it (KVM_CAP_SPAPR_MULTITCE) or guest is running in TCG mode.

Signed-off-by: Alexey Kardashevskiy <address@hidden>
---
Changes:
v1:
* removed checks for liobn as the check is performed already in
spapr_tce_find_by_liobn
* added hcall-multi-tce if the host kernel supports the capability
---
  hw/ppc/spapr.c       |  3 ++
  hw/ppc/spapr_iommu.c | 78 ++++++++++++++++++++++++++++++++++++++++++++++++++++
  target-ppc/kvm.c     |  7 +++++
  target-ppc/kvm_ppc.h |  7 +++++
  trace-events         |  2 ++
  5 files changed, 97 insertions(+)
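
(The hunks quoted below stop short of the H_STUFF_TCE handler. For context,
H_STUFF_TCE stores a single TCE value into npages consecutive entries starting
at ioba; a minimal sketch of its likely shape, reusing the helpers visible in
the hunks below (spapr_tce_find_by_liobn, put_tce_emu), not the patch's exact
code:)

    static target_ulong h_stuff_tce(PowerPCCPU *cpu, sPAPREnvironment *spapr,
                                    target_ulong opcode, target_ulong *args)
    {
        int i;
        target_ulong liobn = args[0];
        target_ulong ioba = args[1];
        target_ulong tce_value = args[2];
        target_ulong npages = args[3];
        target_ulong ret = H_SUCCESS;
        sPAPRTCETable *tcet = spapr_tce_find_by_liobn(liobn);

        if (!tcet) {
            return H_PARAMETER;
        }

        ioba &= ~SPAPR_TCE_PAGE_MASK;

        /* Write the same TCE value into each of the npages entries. */
        for (i = 0; i < npages; ++i, ioba += SPAPR_TCE_PAGE_SIZE) {
            ret = put_tce_emu(tcet, ioba, tce_value);
            if (ret) {
                break;
            }
        }
        return ret;
    }

(As with H_PUT_TCE_INDIRECT below, one hypercall then covers up to npages
entries, which is where the saved guest/host transitions come from.)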

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index e174e04..66929cb 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -500,6 +500,9 @@ static void *spapr_create_fdt_skel(hwaddr initrd_base,
      /* RTAS */
      _FDT((fdt_begin_node(fdt, "rtas")));
+    if (kvmppc_spapr_use_multitce()) {

Sorry I didn't realize this earlier. I think it's more obvious to the reader if the "enabled for TCG" logic is not hidden in some other function:

  if (!kvm_enabled() || kvmppc_supports_multitce()) {

+        SPAPR_HYPERRTAS_ADD("hcall-multi-tce");
+    }
      _FDT((fdt_property(fdt, "ibm,hypertas-functions", hypertas_prop,
                         hypertas_prop_len)));
      _FDT((fdt_property(fdt, "qemu,hypertas-functions", qemu_hypertas_prop,
diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
index 72493d8..ab5037c 100644
--- a/hw/ppc/spapr_iommu.c
+++ b/hw/ppc/spapr_iommu.c
@@ -224,6 +224,82 @@ static target_ulong put_tce_emu(sPAPRTCETable *tcet, target_ulong ioba,
      return H_SUCCESS;
  }
+static target_ulong h_put_tce_indirect(PowerPCCPU *cpu,
+                                       sPAPREnvironment *spapr,
+                                       target_ulong opcode, target_ulong *args)
+{
+    int i;
+    target_ulong liobn = args[0];
+    target_ulong ioba = args[1];
+    target_ulong ioba1 = ioba;
+    target_ulong tce_list = args[2];
+    target_ulong npages = args[3];
+    target_ulong ret = H_PARAMETER;
+    sPAPRTCETable *tcet = spapr_tce_find_by_liobn(liobn);
+    CPUState *cs = CPU(cpu);
+
+    if (!tcet) {
+        return H_PARAMETER;
+    }
+
+    if (npages > 512) {
+        return H_PARAMETER;
+    }
+
+    ioba &= ~SPAPR_TCE_PAGE_MASK;
+    tce_list &= ~SPAPR_TCE_PAGE_MASK;
+
+    for (i = 0; i < npages; ++i, ioba += SPAPR_TCE_PAGE_SIZE) {
+        target_ulong tce = ldq_phys(cs->as, tce_list +

Is this one of those cases where the guest may expect us to mask the upper bits again? It's not an RTAS call, after all. What does sPAPR say?
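
(If sPAPR does call for masking here, a hypothetical fix — none of this is from
the patch — would clear the upper word of the guest-supplied addresses for
32-bit guests before the loop:)

    /* Hypothetical sketch: if a 32-bit guest (MSR[SF] clear) may leave
     * stale bits in the upper halves of its registers, mask them off
     * before using the addresses.
     */
    CPUPPCState *env = &cpu->env;

    if (!(env->msr & (1ULL << MSR_SF))) {
        ioba &= 0xffffffffULL;
        tce_list &= 0xffffffffULL;
    }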


Alex



