From: Auger Eric
Subject: Re: [Qemu-devel] [PATCH v2 09/10] intel-iommu: don't unmap all for shadow page table
Date: Fri, 18 May 2018 09:31:03 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

Hi Peter,

On 05/18/2018 08:06 AM, Peter Xu wrote:
> On Thu, May 17, 2018 at 07:23:33PM +0200, Auger Eric wrote:
>> Hi Peter,
>>
>> On 05/04/2018 05:08 AM, Peter Xu wrote:
>>> IOMMU replay was carried out in many use cases before, e.g., context
>>> cache invalidations and domain flushes.  We used this mechanism to
>>> sync the shadow page table by first (1) unmapping the whole address
>>> space, then (2) walking the page table to remap what's in it.
>>>
>>> This is very dangerous.
>>>
>>> The problem is that there is a small window (in my measurement, about
>>> 3ms) between steps (1) and (2) during which the device sees no (or an
>>> incomplete) page table, and the device never knows that.  This can
>>> cause DMA errors on devices, which assume the page table is always
>>> there.
>> But now that you have the QemuMutex, can a translation and an
>> invalidation occur concurrently? Don't the IOTLB flush and replay
>> happen while the lock is held?
> 
> Note that when we are using vfio-pci devices we can't really know when
> the device started a DMA since that's totally happening only between
> the host IOMMU and the hardware.  

Oh yes, that's fully relevant in the vfio-pci use case. Thank you for
the clarification.

> Say, vfio-pci device page translation is happening in the shadow page
> table, not really in QEMU.  So IMO we aren't protected by anything.
> 
> Also, this patch is dropped in version 3, and I did something else to
> achieve similar goal (by introducing the shadow page sync helper, and
> then for DSIs we'll use that instead of calling IOMMU replay here).
> Please have a look.  Thanks,

OK

Thanks

Eric