Re: [Qemu-arm] [Qemu-devel] [RFC v2 0/8] VIRTIO-IOMMU device


From: Will Deacon
Subject: Re: [Qemu-arm] [Qemu-devel] [RFC v2 0/8] VIRTIO-IOMMU device
Date: Tue, 27 Jun 2017 09:46:42 +0100
User-agent: Mutt/1.5.23 (2014-03-12)

Hi Eric,

On Tue, Jun 27, 2017 at 08:38:48AM +0200, Auger Eric wrote:
> On 26/06/2017 18:13, Jean-Philippe Brucker wrote:
> > On 26/06/17 09:22, Auger Eric wrote:
> >> On 19/06/2017 12:15, Jean-Philippe Brucker wrote:
> >>> On 19/06/17 08:54, Bharat Bhushan wrote:
> >>>> I started adding replay in virtio-iommu and came across how MSI
> >>>> interrupts work with VFIO.
> >>>> I understand that on Intel this works differently, but vsmmu will have
> >>>> the same requirement.
> >>>> kvm-msi-irq-routes are added using the MSI address that is still to be
> >>>> translated by the vIOMMU, not the final translated address, while the
> >>>> irqfd framework currently does not know about emulated IOMMUs
> >>>> (virtio-iommu, vsmmuv3/vintel-iommu).
> >>>> So in my view we have the following options (a sketch of the first one
> >>>> follows below):
> >>>> - Program the translated address when setting up the kvm-msi-irq-route
> >>>> - Route the interrupts via QEMU, which is bad for performance
> >>>> - vhost-virtio-iommu may solve the problem in the long term
> >>>>
> >>>> Is there any other, better option I am missing?
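
[Editorial note: a minimal sketch of the first option, i.e. resolving the MSI
doorbell address through the emulated IOMMU before the KVM route is installed.
viommu_translate() and the viommu/sid parameters are illustrative placeholders,
not existing QEMU or kernel interfaces; only the routing-entry layout comes
from linux/kvm.h.]

#include <linux/kvm.h>
#include <stdint.h>

/* Hypothetical helper: walk the vIOMMU mappings for this device (stream ID
 * @sid) and return the output address for @iova. */
extern uint64_t viommu_translate(void *viommu, uint32_t sid, uint64_t iova);

static void fill_msi_route(struct kvm_irq_routing_entry *e, int gsi,
                           void *viommu, uint32_t sid,
                           uint64_t msi_addr, uint32_t msi_data)
{
    /* Translate the doorbell IOVA written by the guest into the address
     * that the interrupt controller actually expects. */
    uint64_t translated = viommu_translate(viommu, sid, msi_addr);

    e->gsi = gsi;
    e->type = KVM_IRQ_ROUTING_MSI;
    e->flags = 0;
    e->u.msi.address_lo = (uint32_t)translated;
    e->u.msi.address_hi = (uint32_t)(translated >> 32);
    e->u.msi.data = msi_data;
}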
> >>>
> >>> Since we're on the topic of MSIs... I'm currently trying to figure out how
> >>> we'll handle MSIs in the nested translation mode, where the guest manages
> >>> S1 page tables and the host doesn't know about GVA->GPA translation.
> >>
> >> I have a question about the "nested translation mode" terminology. Do
> >> you mean that in that case you use stage 1 + stage 2 of the physical
> >> IOMMU (which the ARM spec normally advises, or was meant for), or do you
> >> mean stage 1 implemented in the vIOMMU and stage 2 implemented in the
> >> pIOMMU? At the moment my understanding is that, for VFIO integration,
> >> the pIOMMU uses a single stage combining both the stage-1 and stage-2
> >> mappings, but the host is not aware of those two stages.
> > 
> > Yes, at the moment the VMM merges stage-1 (GVA->GPA) from the guest with
> > its stage-2 mappings (GPA->HPA) and creates a stage-2 mapping (GVA->HPA)
> > in the pIOMMU via VFIO_IOMMU_MAP_DMA. Stage 1 is disabled in the pIOMMU.
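
[Editorial note: to make the merge concrete, a minimal sketch of what the VMM
does for one mapping today. gpa_to_hva() stands in for the VMM's guest-RAM
lookup (in QEMU this would go through the memory API) and is not a real
function name; the ioctl and structure are from linux/vfio.h.]

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Hypothetical: resolve a guest-physical address to the host virtual
 * address backing it in the VMM's address space. */
extern void *gpa_to_hva(uint64_t gpa);

static int shadow_map(int container_fd, uint64_t gva, uint64_t gpa,
                      uint64_t size)
{
    struct vfio_iommu_type1_dma_map map;

    memset(&map, 0, sizeof(map));
    map.argsz = sizeof(map);
    map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
    map.iova  = gva;                                   /* IOVA seen by the device */
    map.vaddr = (uint64_t)(uintptr_t)gpa_to_hva(gpa);  /* host VA backing the GPA */
    map.size  = size;

    /* Stage 1 is disabled in the pIOMMU; this single mapping folds the
     * guest's GVA->GPA and the VMM's GPA->HPA into one GVA->HPA entry. */
    return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}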
> > 
> > What I mean by "nested mode" is stage 1 + stage 2 in the physical IOMMU.
> > I'm referring to the "Page Table Sharing" bit of the Future Work in the
> > initial RFC for virtio-iommu [1], and also PASID table binding [2] in the
> > case of vSMMU. In that mode, stage-1 page tables in the pIOMMU are managed
> > by the guest, and the VMM only maps GPA->HPA.
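
[Editorial note: for contrast, a rough sketch of what the nested-mode split
could look like on the VFIO side. VFIO_BIND_PASID_TABLE and struct
pasid_table_bind are made-up placeholders for the binding interface still
under discussion in [2]; only VFIO_IOMMU_MAP_DMA above is a real ioctl.]

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Hypothetical binding request: tell the host where the guest keeps its
 * PASID/stage-1 tables so the pIOMMU can walk them directly. */
struct pasid_table_bind {
    uint32_t argsz;
    uint32_t flags;
    uint64_t pasid_table_gpa;   /* guest-physical base of the PASID table */
};
#define VFIO_BIND_PASID_TABLE _IO(VFIO_TYPE, VFIO_BASE + 64) /* hypothetical */

static int bind_guest_tables(int device_fd, uint64_t pasid_table_gpa)
{
    struct pasid_table_bind bind = {
        .argsz = sizeof(bind),
        .flags = 0,
        .pasid_table_gpa = pasid_table_gpa,
    };

    /* Stage 2 (GPA->HPA) for guest RAM is still set up separately with
     * VFIO_IOMMU_MAP_DMA; the host never sees GVA->GPA mappings. */
    return ioctl(device_fd, VFIO_BIND_PASID_TABLE, &bind);
}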
> 
> OK, I need to read that part more thoroughly. I was told in the past that
> handling nested stages at the pIOMMU level was considered too complex and
> difficult to maintain, but the SMMU architecture is definitely designed
> for that. Michael asked why we did not already use that for vsmmu
> (nested stages are used on the AMD IOMMU, I think).

Curious -- but what gave you that idea? I worry that something I might have
said wasn't clear or has been misunderstood.

Will


