From: Jike Song
Subject: Re: [Qemu-devel] [RFC PATCH v3 3/3] VFIO Type1 IOMMU change: to support with iommu and without iommu
Date: Tue, 10 May 2016 15:52:27 +0800
User-agent: Mozilla/5.0 (X11; Linux i686 on x86_64; rv:17.0) Gecko/20130801 Thunderbird/17.0.8

On 05/05/2016 05:27 PM, Tian, Kevin wrote:
>> From: Song, Jike
>>
>> IIUC, an api-only domain is a VFIO domain *without* underlying IOMMU
>> hardware. As you said in another mail, "rather than programming them
>> into an IOMMU for a device, it simply stores the translations for use
>> by later requests".
>>
>> That imposes a constraint on the gfx driver: the hardware IOMMU must be
>> disabled. Otherwise, if an IOMMU is present, the gfx driver eventually
>> programs the hardware IOMMU with the IOVA returned by pci_map_page or
>> dma_map_page, while the IOMMU backend for vgpu only maintains GPA <-> HPA
>> translations without any knowledge of the hardware IOMMU. How is the
>> device model then supposed to get an IOVA for a given GPA (and thereby
>> an HPA from the IOMMU backend here)?
>>
>> If things go as guessed above, where vfio_pin_pages() pins & translates
>> vaddr to PFN, it will be very difficult for the device model to figure
>> out:
>>
>>      1. for a given GPA, how to avoid calling dma_map_page multiple times?
>>      2. for which page to call dma_unmap_page?
>>
>> --
> 
> We have to support both the w/ iommu and w/o iommu cases, since
> that fact is out of the GPU driver's control. A simple way is to use
> dma_map_page, which internally copes with the w/ and w/o iommu cases
> gracefully, i.e. it returns an HPA w/o iommu and an IOVA w/ iommu.
> Then in this file we only need to cache the GPA to whatever dma_addr_t
> is returned by dma_map_page.
> 
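
To make the caching idea above concrete, here is a rough sketch of what it
could look like (just an illustration, not taken from the actual patch;
vgpu_pfn_entry, vgpu_pfn_lookup, vgpu_pfn_insert and vgpu_pfn_remove are
made-up names): the dma_addr_t returned by dma_map_page is cached per guest
pfn, which would also answer questions 1 and 2 above.

#include <linux/dma-mapping.h>
#include <linux/rbtree.h>
#include <linux/slab.h>

/* One cache entry per pinned guest pfn; all names here are hypothetical. */
struct vgpu_pfn_entry {
	struct rb_node	node;
	unsigned long	gfn;		/* guest pfn, i.e. GPA >> PAGE_SHIFT */
	struct page	*page;		/* pinned host page backing this gfn */
	dma_addr_t	dma_addr;	/* HPA w/o iommu, IOVA w/ iommu */
};

static dma_addr_t vgpu_map_gfn(struct device *dev, struct rb_root *cache,
			       unsigned long gfn, struct page *page)
{
	/* question 1: look up the gfn first so it is never mapped twice */
	struct vgpu_pfn_entry *e = vgpu_pfn_lookup(cache, gfn);

	if (e)
		return e->dma_addr;

	e = kzalloc(sizeof(*e), GFP_KERNEL);
	if (!e)
		return DMA_MAPPING_ERROR;

	e->gfn = gfn;
	e->page = page;
	/* dma_map_page copes with both cases: HPA w/o iommu, IOVA w/ iommu */
	e->dma_addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, e->dma_addr)) {
		kfree(e);
		return DMA_MAPPING_ERROR;
	}

	vgpu_pfn_insert(cache, e);
	return e->dma_addr;
}

/*
 * question 2: unmap walks the same cache, so dma_unmap_page is called on
 * exactly the page/dma_addr pair that was mapped earlier.
 */
static void vgpu_unmap_gfn(struct device *dev, struct rb_root *cache,
			   unsigned long gfn)
{
	struct vgpu_pfn_entry *e = vgpu_pfn_lookup(cache, gfn);

	if (!e)
		return;

	dma_unmap_page(dev, e->dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
	vgpu_pfn_remove(cache, e);
	kfree(e);
}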

Hi Alex, Kirti and Neo, any thoughts on the IOMMU compatibility here?

--
Thanks,
Jike



