
Re: [Qemu-devel] [RFC v1 3/7] memory: iommu support


From: Benjamin Herrenschmidt
Subject: Re: [Qemu-devel] [RFC v1 3/7] memory: iommu support
Date: Sat, 13 Oct 2012 22:37:32 +1100

On Sat, 2012-10-13 at 09:30 +0000, Blue Swirl wrote:

> > The problem with map and unmap is invalidation: how do you convey
> > to a device that has done a map that the guest has invalidated a
> > translation entry?
> 
> Also in Sparc32, the IOMMU uses a table in RAM which the guest can
> change, so a callback to update the translation tables should be
> available. On Sparc64 there's an IOTLB but also a fallback to a TSB
> translation table in memory. We could rely on the guest issuing
> demaps/flushes when the memory changes and invalidate the
> translations then.

Right, the table's in memory on POWER too, but such tables generally
also have a cache (TLB) with some MMIO-based logic to perform
invalidations.

Typically that logic involves a bit to trigger a TLB kill and a status
bit to read back for confirmation that the flush has completed. In
that case we can probably delay reporting that status bit until all
the maps we kept track of are gone ....

 ... but that means tracking them, which is expensive.
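
A minimal sketch of that deferral idea (all names here are made up,
nothing to do with the real QEMU memory/IOMMU API): the kill-bit write
just marks a flush as pending, and the status read only reports
completion once every map we handed out has been released.

/* Hypothetical sketch only -- not the real QEMU memory/IOMMU API.
 * Models an IOMMU whose TLB-kill MMIO register has a "flush done"
 * status bit that we delay until all outstanding maps are gone. */
#include <stdbool.h>
#include <stdint.h>

#define TLBKILL_START   (1u << 0)   /* write: request a TLB invalidation */
#define TLBKILL_DONE    (1u << 1)   /* read: the invalidation has completed */

typedef struct FakeIommu {
    unsigned outstanding_maps;      /* maps handed to devices, not yet unmapped */
    bool flush_pending;             /* guest asked for a TLB kill */
} FakeIommu;

/* Devices call these around DMA; counting them lets a pending flush be
 * held back until no stale translation can still be in use. */
static void iommu_map(FakeIommu *s)   { s->outstanding_maps++; }
static void iommu_unmap(FakeIommu *s) { if (s->outstanding_maps) s->outstanding_maps--; }

/* Guest writes the kill bit to the MMIO register. */
static void tlbkill_write(FakeIommu *s, uint32_t val)
{
    if (val & TLBKILL_START) {
        s->flush_pending = true;
    }
}

/* Guest polls the status bit: only report "done" once every tracked
 * map has been released. */
static uint32_t tlbkill_read(FakeIommu *s)
{
    if (s->flush_pending && s->outstanding_maps == 0) {
        s->flush_pending = false;
        return TLBKILL_DONE;
    }
    return 0;
}

int main(void)
{
    FakeIommu s = { 0, false };
    iommu_map(&s);                      /* a device maps a translation */
    tlbkill_write(&s, TLBKILL_START);   /* guest requests an invalidation */
    uint32_t st1 = tlbkill_read(&s);    /* not done: a map is still outstanding */
    iommu_unmap(&s);                    /* device finishes its DMA */
    uint32_t st2 = tlbkill_read(&s);    /* now the flush can complete */
    return (st1 == 0 && st2 == TLBKILL_DONE) ? 0 : 1;
}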

Also, the IBM IOMMUs are nasty here... some of them, if we ever emulate
them, actually participate in the fabric coherency protocol and thus
don't require an explicit MMIO access for invalidations.

So if we were to emulate such HW, we would have to intercept accesses
to the portion of RAM that is configured as an IOMMU table.
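
Roughly, such an intercept could look like this (hypothetical
structures, not QEMU's MemoryRegion API): writes to the pages backing
the table go through a hook that drops the cached translation, which
is what a coherency-snooping IOMMU gets for free in hardware.

/* Hypothetical sketch only -- not QEMU's MemoryRegion API.  The idea:
 * the guest-visible IOMMU table lives in "RAM", but writes to it are
 * trapped so the emulated TLB can be invalidated immediately. */
#include <stdint.h>
#include <string.h>

#define TABLE_ENTRIES 256

typedef struct {
    uint64_t table[TABLE_ENTRIES];      /* guest's translation table */
    uint64_t tlb[TABLE_ENTRIES];        /* emulated cached translations */
    uint8_t  tlb_valid[TABLE_ENTRIES];
} SnoopIommu;

/* Write hook for the page(s) backing the table: update the entry and
 * invalidate any cached translation for that index. */
static void table_write(SnoopIommu *s, unsigned index, uint64_t val)
{
    if (index < TABLE_ENTRIES) {
        s->table[index] = val;
        s->tlb_valid[index] = 0;        /* snoop: drop the stale TLB entry */
    }
}

/* Translation path: use the TLB if valid, otherwise re-read the table. */
static uint64_t translate(SnoopIommu *s, unsigned index)
{
    if (index >= TABLE_ENTRIES) {
        return 0;
    }
    if (!s->tlb_valid[index]) {
        s->tlb[index] = s->table[index];
        s->tlb_valid[index] = 1;
    }
    return s->tlb[index];
}

int main(void)
{
    SnoopIommu s;
    memset(&s, 0, sizeof(s));
    table_write(&s, 4, 0x1000);
    uint64_t a = translate(&s, 4);      /* 0x1000, now cached */
    table_write(&s, 4, 0x2000);         /* guest rewrites the entry... */
    uint64_t b = translate(&s, 4);      /* ...and the stale cache was dropped */
    return (a == 0x1000 && b == 0x2000) ? 0 : 1;
}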

Thankfully we only emulate those machines as "paravirt" with a
hypervisor interface to the IOMMU (aka TCEs), so we are fine for now.
And if we ever emulate the real HW, well, the later models don't do
that anymore (but their MMIO for killing the cache doesn't have a
status bit either; the assumption is that the latency of a simple
read-back is enough).
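
To make the paravirt point concrete, a toy sketch (not QEMU's actual
sPAPR/TCE code) of why the hypercall path avoids the problem: every
table update goes through the hypervisor, so we see it synchronously
and can invalidate right there.

/* Hypothetical sketch only -- not QEMU's sPAPR/TCE implementation.
 * With a paravirt IOMMU the guest never writes the table directly; it
 * issues a "put TCE" hypercall, so the emulator always has a hook at
 * exactly the point where a translation changes. */
#include <stdint.h>

#define TCE_ENTRIES 256
#define H_SUCCESS   0
#define H_PARAMETER (-4)

typedef struct {
    uint64_t tce[TCE_ENTRIES];          /* hypervisor-owned translation table */
} ParavirtIommu;

/* Placeholder: a real emulator would tear down any host-side mappings
 * derived from the old entry before it is replaced. */
static void invalidate_entry(ParavirtIommu *s, unsigned index)
{
    (void)s;
    (void)index;
}

/* Hypercall-style handler: validate, invalidate the old entry, install
 * the new one.  No MMIO kill register or RAM snooping is needed. */
static long h_put_tce(ParavirtIommu *s, unsigned index, uint64_t tce)
{
    if (index >= TCE_ENTRIES) {
        return H_PARAMETER;
    }
    invalidate_entry(s, index);
    s->tce[index] = tce;
    return H_SUCCESS;
}

int main(void)
{
    ParavirtIommu s = { { 0 } };
    return h_put_tce(&s, 3, 0x4000) == H_SUCCESS ? 0 : 1;
}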

Overall, a bloody can of worms... under the rug sounds like a nice place
to leave it for now :-)

Cheers,
Ben.




