From: Gavin Shan
Subject: [Qemu-devel] [PATCH RFC 0/4] sPAPR: Support multiple PEs in one PHB
Date: Fri, 18 Sep 2015 16:30:12 +1000

This patchset is based on David Gibson's git tree:
git://github.com/dgibson/qemu.git
(branch: vfio). It also requires host kernel changes that are currently
under review:

https://patchwork.ozlabs.org/patch/519135/
https://patchwork.ozlabs.org/patch/519136/

Currently, EEH works under the assumption that every sPAPRPHBState, which
is associated with a VFIO container in the VFIO case, has only one attached
IOMMU group (PE). The requested EEH operation (such as a reset) is applied
to all PEs attached to the specified sPAPRPHBState, which breaks the
boundary of the affected domain when the sPAPRPHBState has multiple IOMMU
groups (PEs) attached.

The patchset resolves the above issue by using the newly exposed EEH v2
API, which accepts an IOMMU group (PE) to specify the affected domain of
the requested EEH operation. Every PE is identified by its PE address,
which was previously (PE's primary bus ID + 1); after this patchset it
becomes (IOMMU group ID + 1). The PE address is passed with every EEH
operation requested by the guest so that it can be forwarded to the host
and affect only the target PE.
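
To make the addressing change concrete, here is a small standalone sketch
(not code from the series) comparing the two derivations. The names
vfio_get_group_id_stub(), pe_addr_from_bus() and pe_addr_from_group() are
hypothetical and used only for illustration; the real vfio_get_group_id()
helper is introduced in patch 2 and its signature may differ.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the vfio_get_group_id() helper from patch 2; the real one
 * asks VFIO for the IOMMU group backing the device/PE. */
static int vfio_get_group_id_stub(int pe_index)
{
    return pe_index;        /* illustrative value only */
}

/* Old scheme: PE address = PE's primary bus ID + 1 */
static uint64_t pe_addr_from_bus(uint8_t primary_bus)
{
    return (uint64_t)primary_bus + 1;
}

/* New scheme (this series): PE address = IOMMU group ID + 1 */
static uint64_t pe_addr_from_group(int group_id)
{
    return (uint64_t)group_id + 1;
}

int main(void)
{
    int group_id = vfio_get_group_id_stub(3);

    printf("old-style PE address: 0x%" PRIx64 "\n", pe_addr_from_bus(0x02));
    printf("new-style PE address: 0x%" PRIx64 "\n", pe_addr_from_group(group_id));
    return 0;
}

With the new scheme the guest-visible PE address maps one-to-one onto a
host IOMMU group, so the host can confine the EEH operation to that group
rather than to every PE behind the container.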

Gavin Shan (4):
  linux-headers: Sync vfio.h
  VFIO: Introduce vfio_get_group_id()
  sPAPR: Support multiple IOMMU groups in PHB for EEH operations
  sPAPR: Remove EEH callbacks in sPAPRPHBClass

 hw/ppc/spapr_pci.c          | 68 ++++++++++++-------------------------
 hw/ppc/spapr_pci_vfio.c     | 83 +++++++++++++++++++++++++++++++++------------
 hw/vfio/pci.c               | 12 +++++++
 include/hw/pci-host/spapr.h | 11 +++---
 include/hw/vfio/vfio.h      |  1 +
 linux-headers/linux/vfio.h  |  6 ++++
 6 files changed, 110 insertions(+), 71 deletions(-)

-- 
2.1.0