From: Neo Jia
Subject: Re: [Qemu-devel] [RFC PATCH v3 3/3] VFIO Type1 IOMMU change: to support with iommu and without iommu
Date: Fri, 13 May 2016 01:41:50 -0700
User-agent: Mutt/1.5.24 (2015-08-30)

On Fri, May 13, 2016 at 08:02:41AM +0000, Tian, Kevin wrote:
> > From: Neo Jia [mailto:address@hidden]
> > Sent: Friday, May 13, 2016 3:38 PM
> > 
> > On Fri, May 13, 2016 at 07:13:44AM +0000, Tian, Kevin wrote:
> > > > From: Neo Jia [mailto:address@hidden]
> > > > Sent: Friday, May 13, 2016 2:42 PM
> > > >
> > > >
> > > > >
> > > > > We possibly have the same requirement from the mediated driver backend:
> > > > >
> > > > >       a) get a GFN, when the guest tries to tell the hardware;
> > > > >       b) consult the vfio iommu with that GFN[1]: will you find me a
> > > > >          proper dma_addr?
> > > >
> > > > We will provide you the pfn via vfio_pin_pages, so you can map it for
> > > > DMA purposes in your i915 driver, which is what we are doing today.
> > > >
> > >
> > > Can such a 'map' operation be consolidated in the vGPU core driver? I
> > > don't think the Intel vGPU driver has any feature proactively relying
> > > on the iommu. The reason we keep talking about the iommu is just that
> > > the kernel may enable the iommu for the physical GPU, so we need to
> > > make sure our device model can work in such a configuration. And this
> > > requirement should apply to all vendors, not be Intel specific (like
> > > you said, you are doing it already today).
> > 
> > Hi Kevin,
> > 
> > Actually, such a requirement is already satisfied today, as all vendor
> > drivers should transparently work with and without a system iommu on
> > bare metal, right?
> > 
> > So I don't see any new requirement here. Such consolidation doesn't help;
> > it only adds complexity to the system, since vendor drivers will not
> > remove their own dma_map_xxx functions, which are still required to
> > support the non-mediated cases.
> > 
> 
> Thanks for your information, which makes it clearer where the difference
> is. :-)
> 
> Based on your description, it looks like you treat guest pages the same as
> normal process pages, which all share the same code path when mapped as a
> DMA target, so it is pointless to separate the guest page mapping out into
> the vGPU core driver. Is this understanding correct?

Yes.

It is Linux's responsibility to allocate the physical pages for the QEMU
process, which happen to be the guest physical memory that we might use as a
DMA target. From the device's point of view, it is just some physical
location it needs to hit.
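
Roughly, a vendor driver ends up doing something like the sketch below for
each guest page it needs as a DMA target. This is only an illustration: the
vfio_pin_pages()/vfio_unpin_pages() prototypes here are a guess at the shape
of the proposed interface (see the patch itself for the real one), and
vendor_map_guest_page() is a made-up helper name.

/* Sketch only: pin one guest pfn through the proposed VFIO helper and
 * feed it to the streaming DMA API.  The vfio_pin_pages() signature is
 * illustrative, not the one from this patch series. */
#include <linux/dma-mapping.h>
#include <linux/iommu.h>
#include <linux/mm.h>
#include <linux/vfio.h>

static int vendor_map_guest_page(struct device *dev, unsigned long gfn,
                                 dma_addr_t *dma)
{
        unsigned long pfn;

        /* gfn -> host pfn, pinned by the vfio type1 backend */
        if (vfio_pin_pages(dev, &gfn, 1, IOMMU_READ | IOMMU_WRITE, &pfn) != 1)
                return -EFAULT;

        /* pfn -> bus address; the same call works with or without a
         * system iommu, the DMA API hides the difference */
        *dma = dma_map_page(dev, pfn_to_page(pfn), 0, PAGE_SIZE,
                            DMA_BIDIRECTIONAL);
        if (dma_mapping_error(dev, *dma)) {
                vfio_unpin_pages(dev, &gfn, 1);
                return -EFAULT;
        }

        return 0;
}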

> 
> On our side, so far guest pages are treated differently from normal process
> pages, which is the main reason I asked whether we can consolidate that
> part. It looks like that's not necessary now, since it's not a common
> requirement.

> 
> One additional question, though. Jike already mentioned the need to shadow
> the GPU MMU (called the GTT table on the Intel side) in our device model.
> 'Shadow' here basically means we need to translate from the 'gfn' in a
> guest PTE to the 'dma_addr_t' returned by dma_map_xxx. Based on the
> gfn->pfn translation provided by VFIO (in your v3 driver), the
> gfn->dma_addr_t mapping can be constructed accordingly in the vendor
> driver. So do you have a similar requirement? If yes, do you see any value
> in unifying that translation structure, or would you prefer to maintain it
> in the vendor driver?

Yes, I think it would make sense to do this in the vendor driver, as it keeps
the iommu type1 driver clean: it will only track the gfn-to-pfn
translation/pinning (on the CPU). Then you can reuse your existing driver
code to map the pfn as a DMA target.
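
As a rough sketch of that split (shadow_gtt_write() and struct vendor_vgpu
are placeholders, and vendor_map_guest_page() is the hypothetical helper from
the sketch above):

/* Sketch: shadowing one guest GTT entry in the vendor driver.  The
 * iommu type1 layer only provides gfn -> pfn (pinned); the
 * gfn -> dma_addr_t step stays on the vendor side. */
struct vendor_vgpu {
        struct device *dev;     /* physical GPU device used for dma_map_xxx */
        /* ... vendor-specific state ... */
};

static int vendor_shadow_gtt_entry(struct vendor_vgpu *vgpu,
                                   unsigned long gtt_index,
                                   unsigned long gfn)
{
        dma_addr_t dma;
        int ret;

        /* gfn -> pinned pfn -> dma_addr_t, as in the sketch above */
        ret = vendor_map_guest_page(vgpu->dev, gfn, &dma);
        if (ret)
                return ret;

        /* Write the host dma address into the shadow GTT entry that
         * the hardware actually walks (placeholder helper). */
        shadow_gtt_write(vgpu, gtt_index, dma);
        return 0;
}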

Also, you can do some optimization such as keeping a small cache within your
device driver: if a gfn is already translated, there is no need to query
again.
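
For example, a minimal version of such a cache could be a small hashtable
keyed by gfn. Again, just a sketch under the same assumptions as above; a
real driver would also need locking, unpin/unmap on teardown, and
invalidation when the guest changes its mappings.

#include <linux/hashtable.h>
#include <linux/slab.h>

struct gfn_dma_entry {
        unsigned long gfn;
        dma_addr_t dma;
        struct hlist_node node;
};

/* Small per-device cache of gfn -> dma_addr_t, 2^8 buckets. */
static DEFINE_HASHTABLE(gfn_dma_cache, 8);

static int lookup_or_map(struct device *dev, unsigned long gfn,
                         dma_addr_t *dma)
{
        struct gfn_dma_entry *e;

        /* Cache hit: the gfn was already translated, skip pin/map. */
        hash_for_each_possible(gfn_dma_cache, e, node, gfn) {
                if (e->gfn == gfn) {
                        *dma = e->dma;
                        return 0;
                }
        }

        /* Cache miss: pin and map, then remember the translation. */
        if (vendor_map_guest_page(dev, gfn, dma))
                return -EFAULT;

        e = kzalloc(sizeof(*e), GFP_KERNEL);
        if (!e)
                return -ENOMEM;
        e->gfn = gfn;
        e->dma = *dma;
        hash_add(gfn_dma_cache, &e->node, gfn);
        return 0;
}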

Thanks,
Neo

> 
> Thanks
> Kevin


