Re: [Qemu-arm] [Qemu-devel] [RFC v2 0/8] VIRTIO-IOMMU device


From: Tian, Kevin
Subject: Re: [Qemu-arm] [Qemu-devel] [RFC v2 0/8] VIRTIO-IOMMU device
Date: Fri, 14 Jul 2017 06:58:03 +0000

> From: Jean-Philippe Brucker [mailto:address@hidden]
> Sent: Friday, July 7, 2017 11:15 PM
> 
> Hi Ashish,
> 
> On 07/07/17 00:33, Tian, Kevin wrote:
> >> From: Kalra, Ashish [mailto:address@hidden]
> >> Sent: Friday, July 7, 2017 7:24 AM
> >>
> >> I have a generic question on vIOMMU support: is there any proposal/plan
> >> to add ATS/PRI extension support to vIOMMUs and allow end-to-end
> >> handling of (v)IOMMU page faults (with the device-side implementation
> >> in vhost)?
> >>
> >> Again, the motivation is to do DMA on paged guest memory and
> >> potentially avoid the requirement of pinned/locked guest physical
> >> memory for DMA.
> >
> > Yes, that's a necessary part of supporting SVM in both the virtio-iommu
> > approach and the fully emulated approach (e.g. for Intel VT-d). There
> > are already patches and discussions in another thread about how to
> > propagate IOMMU page faults to the vIOMMU. Once that is done,
> > vIOMMU page fault emulation will be added on top.
> >
> > https://lkml.org/lkml/2017/6/27/964
> 
> For virtio-iommu, I'd like to add an event virtqueue for the device to
> send page faults to the driver, in a format similar to a PRI Page Request.
> The driver would then send a reply via the request virtqueue in a format
> similar to a PRG Response.
> 
> In QEMU, the device implementation would hopefully be based on the same
> mechanism as VT-d. The vhost implementation would receive I/O page faults
> from VFIO and forward them on the event virtqueue, similarly to the
> userspace implementation.
> 
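
As a rough illustration of the fault reporting described above, the event
pushed on the event virtqueue and the driver's reply might look roughly
like this in C. Note that these structures are not from the RFC or from any
virtio-iommu specification; the names and fields are hypothetical
assumptions, modeled loosely on a PCI PRI Page Request and its PRG Response.

    #include <stdint.h>

    /* Hypothetical fault event the device would push on the event virtqueue.
     * Field names and layout are illustrative only. */
    struct virtio_iommu_fault_evt {
        uint32_t endpoint;   /* ID of the endpoint that faulted */
        uint32_t pasid;      /* process address space ID, if SVM is in use */
        uint64_t address;    /* faulting IOVA, page aligned */
        uint32_t flags;      /* read/write/exec, privileged, PASID valid... */
        uint32_t prg_index;  /* page request group index, echoed in the reply */
    };

    /* Hypothetical reply the driver would send back on the request virtqueue. */
    struct virtio_iommu_fault_resp {
        uint32_t endpoint;
        uint32_t pasid;
        uint32_t prg_index;  /* matches the request group being answered */
        uint32_t resp_code;  /* e.g. success / invalid / failure, as in PRI */
    };

The driver would handle the fault (for example, ask the guest OS to fault
the page in), then queue such a reply on the request virtqueue so the
device can complete or abort the stalled DMA.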

Agree. I expect the path between QEMU and VFIO is generic enough
for both the emulated IOMMU and virtio-iommu. The difference is in the
propagation path to the guest, which depends on the definition of the
different virtual interfaces.

Thanks
Kevin
