
Re: [Qemu-devel] [RFC v1 3/7] memory: iommu support


From: Benjamin Herrenschmidt
Subject: Re: [Qemu-devel] [RFC v1 3/7] memory: iommu support
Date: Fri, 12 Oct 2012 13:51:09 +1100

On Thu, 2012-10-11 at 15:57 +0200, Avi Kivity wrote:
> >> Map/unmap is supported via address_space_map(), which calls
> >> ->translate().  I don't see how a lower-level map/unmap helps,
> unless
> >> the hardware supplies such a function.
> > 
> > Yep, it's just the map/unmap callbacks that are not supported
> anymore,
> > but nobody uses that feature of DMAContext yet.
> 
> What do those callbacks even mean?

Well, the unmap callback was meant for notifying the device that did a
map() that the iommu has invalidated part of that mapping.

The rough idea was that the actual invalidations would be delayed until
all "previous" maps have gone away, which works fine without callbacks
for transient maps (packet buffers, etc...) but doesn't for long-lived
ones.

So in addition, we would call that callback for devices that own
long-lived maps, asking them to dispose of them (and eventually retry
them, which might or might not fail depending on why the invalidation
occurred in the first place).

The invalidation would still be delayed until the last old map has gone
away, so it's not a synchronous callback; it's more like a notification
to the device to wake up and do something.
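To make that concrete, here's a minimal sketch of the scheme described
above. Every name here (iommu_map, iommu_invalidate, the callback shape,
etc.) is invented for illustration; this is not the actual DMAContext
API, just the deferred-invalidation idea in miniature:

```c
/* Hypothetical sketch of the unmap-notification scheme: an
 * invalidation notifies owners of long-lived maps but only
 * completes once the last pre-existing map has gone away.
 * All names here are invented for illustration. */
#include <stdbool.h>
#include <stddef.h>

#define MAX_MAPS 8

typedef void (*UnmapCallback)(int idx);

typedef struct {
    bool active;
    bool long_lived;
    UnmapCallback unmap_cb;   /* notification, not synchronous teardown */
} Mapping;

static Mapping maps[MAX_MAPS];
static int outstanding;              /* maps created before the invalidation */
static bool invalidation_pending;
static bool invalidation_done;

int cb_fired[MAX_MAPS];              /* records which devices were notified */

/* Example device callback: just note that we were asked to drop the map. */
void sample_cb(int idx) { cb_fired[idx] = 1; }

/* Create a mapping; a long-lived map registers a callback. */
int iommu_map(bool long_lived, UnmapCallback cb)
{
    for (int i = 0; i < MAX_MAPS; i++) {
        if (!maps[i].active) {
            maps[i].active = true;
            maps[i].long_lived = long_lived;
            maps[i].unmap_cb = cb;
            outstanding++;
            return i;
        }
    }
    return -1;
}

/* Device drops a mapping; the deferred invalidation completes once the
 * last outstanding map is gone. */
void iommu_unmap(int idx)
{
    if (maps[idx].active) {
        maps[idx].active = false;
        if (--outstanding == 0 && invalidation_pending) {
            invalidation_pending = false;
            invalidation_done = true;
        }
    }
}

/* Invalidate: notify owners of long-lived maps, then defer completion
 * until all previous maps have gone away. */
void iommu_invalidate(void)
{
    invalidation_pending = true;
    for (int i = 0; i < MAX_MAPS; i++) {
        if (maps[i].active && maps[i].long_lived && maps[i].unmap_cb) {
            maps[i].unmap_cb(i);   /* asynchronous notification only */
        }
    }
    if (outstanding == 0) {
        invalidation_pending = false;
        invalidation_done = true;
    }
}
```

The point of the split is visible in the flow: iommu_invalidate() fires
the callbacks immediately, but invalidation_done only flips when the
devices have actually released their maps via iommu_unmap().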

But in the latest patches that went in, because the whole scheme was too
complex and not really that useful, I ripped out the whole map tracking
etc... I kept the unmap callback API there in case we want to redo it
more sanely.

When emulating HW IOMMUs, the "invalidation not complete" state is easy
to report asynchronously to the guest via a status bit that the guest is
supposedly polling after doing an invalidation request.
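A tiny sketch of that status-bit pattern, with an invented register
layout (the bit names and the device-model hooks here are illustrative,
not any real IOMMU's programming interface):

```c
/* Hedged sketch: guest requests an invalidation, then polls a busy bit;
 * the device model clears it only after the last outstanding map is
 * released. Register layout and names are invented for illustration. */
#include <stdint.h>

#define INVAL_REQ  (1u << 0)   /* guest sets this to request invalidation */
#define INVAL_BUSY (1u << 1)   /* stays set while old maps remain */

static uint32_t status_reg;
static int live_maps = 2;      /* pretend two maps predate the request */

/* Guest side: kick off an invalidation. */
void guest_request_invalidate(void)
{
    status_reg |= INVAL_REQ | INVAL_BUSY;
}

/* Device-model side: called as each outstanding map is released. */
void device_map_released(void)
{
    if (live_maps > 0 && --live_maps == 0 && (status_reg & INVAL_REQ)) {
        status_reg &= ~(INVAL_REQ | INVAL_BUSY);   /* now complete */
    }
}

/* Guest side: the poll loop just checks the busy bit. */
int guest_invalidation_done(void)
{
    return !(status_reg & INVAL_BUSY);
}
```

Nothing blocks anywhere: completion is purely a state the guest observes
by polling, which is why this maps so naturally onto emulated HW.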

On something like synchronous hcalls (PAPR), the idea was to delay the
hcall completion by suspending the CPU that issued it.

A lot of pain for what is essentially a corner case that doesn't happen
in practice... unless we start doing mapping games.

By mapping games, I mean having an emulated device MMIO space mapped
into user space in a way where the kernel might change the mapping
"live" (for example to point to backup memory as it migrates things
away, etc...). This kind of stuff typically happens with graphics, where
graphic objects can move between memory and VRAM.

Cheers,
Ben



