
From: Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH 08/13] iommu: Introduce IOMMU emulation infrastructure
Date: Tue, 15 May 2012 19:54:40 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:11.0) Gecko/20120329 Thunderbird/11.0.1

On 05/15/2012 07:41 PM, Benjamin Herrenschmidt wrote:
> On Tue, 2012-05-15 at 18:58 -0500, Anthony Liguori wrote:
>
>>>> Even ancient PIO devices really don't block indefinitely.
>>>
>>> In our case (TCEs) it's a hypervisor call, not an MMIO op, so to some
>>> extent it's even more likely to do "blocking" things.
>>
>> Yes, so I think the right thing to do is not to model hypercalls for sPAPR
>> as synchronous calls but rather as asynchronous calls.  Obviously, simple
>> ones can use a synchronous implementation...
>>
>> This is a matter of setting hlt=1 before dispatching the hypercall and
>> passing a continuation to the call that, when executed, prepares the
>> CPUState for the hypercall return and then sets hlt=0 to resume the CPU.
>
> Is there any reason not to set that hlt after the dispatch?  I.e. from
> within the hypercall, for the very few that want to do asynchronous
> completion, do something like spapr_hcall_suspend() before returning?

You certainly could do that, but it may get a little weird dealing with the
return path.  You'd have to return something like -EWOULDBLOCK and make sure
you handle that in the dispatch code appropriately.
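Roughly, the two variants share the same machinery.  A minimal sketch of the
hlt/continuation idea, where CPUState, spapr_hcall_suspend() and the
return-value plumbing are all illustrative stand-ins rather than the real
QEMU API:

```c
/* Illustrative stand-ins for QEMU types -- not the real API. */
typedef struct CPUState {
    int halted;             /* hlt flag: 1 = stopped, waiting for an event */
    unsigned long gpr3;     /* would hold the hypercall return value */
} CPUState;

#define H_SUCCESS      0
#define H_PENDING  (-1000)  /* internal "would block" marker, like -EWOULDBLOCK */

typedef long (*spapr_hcall_fn)(CPUState *cpu, unsigned long *args);

/* Continuation: runs later, fills in the return value and resumes the CPU. */
void hcall_complete(CPUState *cpu, long ret)
{
    cpu->gpr3 = (unsigned long)ret;  /* prepare CPUState for the hcall return */
    cpu->halted = 0;                 /* hlt=0: let the vCPU run again */
}

/* A hypercall that wants async completion suspends itself before returning. */
void spapr_hcall_suspend(CPUState *cpu)
{
    cpu->halted = 1;                 /* hlt=1 */
}

long dispatch_hcall(CPUState *cpu, spapr_hcall_fn fn, unsigned long *args)
{
    long ret = fn(cpu, args);

    if (ret == H_PENDING) {
        /* The hcall called spapr_hcall_suspend() and stashed a
         * continuation; the return value is delivered later by
         * hcall_complete() when the event finishes. */
        return 0;
    }
    cpu->gpr3 = (unsigned long)ret;  /* synchronous case: return immediately */
    return 0;
}
```

The -EWOULDBLOCK-style marker is exactly the "weird return path" above: the
dispatcher has to recognize it and *not* write a return value yet.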

>>> It would have been possible to implement a "busy" return status with the
>>> guest having to try again, unfortunately that's not how Linux has
>>> implemented it, so we are stuck with the current semantics.
>>>
>>> Now, if you think that dropping the lock isn't good, what do you reckon
>>> I should do?
>>
>> Add a reference count to dma map calls and a flush_pending flag.  If
>> flush_pending && ref > 0, return NULL for all map calls.
>>
>> Decrement ref on unmap and if ref = 0 and flush_pending, clear
>> flush_pending.  You could add a flush_notifier too for this event.
>>
>> dma_flush() sets flush_pending if ref > 0.  Your TCE flush hypercall would
>> register for flush notifications and squirrel away the hypercall completion
>> continuation.
>
> Ok, I'll look into it, thanks.  Any good example to look at for how that
> continuation stuff works?

Just a callback and an opaque.  You could look at the AIOCBs in the block
layer.
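Putting the refcount scheme and the callback+opaque continuation together, a
rough sketch (DMAContext, dma_map() and friends are invented names for
illustration, not actual QEMU interfaces):

```c
#include <stddef.h>

/* Sketch of the map refcount + flush_pending scheme described above; all
 * names here are illustrative, not real QEMU API. */
typedef void (*flush_notify_fn)(void *opaque);   /* callback + opaque */

typedef struct DMAContext {
    int ref;                  /* outstanding dma map calls */
    int flush_pending;        /* a flush is waiting for ref to hit 0 */
    flush_notify_fn notify;   /* squirreled-away completion continuation */
    void *notify_opaque;
} DMAContext;

void *dma_map(DMAContext *dma, void *addr)
{
    if (dma->flush_pending && dma->ref > 0) {
        return NULL;          /* refuse new mappings while a flush waits */
    }
    dma->ref++;
    return addr;              /* stand-in for the real translation */
}

void dma_unmap(DMAContext *dma)
{
    if (--dma->ref == 0 && dma->flush_pending) {
        dma->flush_pending = 0;
        if (dma->notify) {    /* fire the TCE-flush continuation */
            flush_notify_fn cb = dma->notify;
            dma->notify = NULL;
            cb(dma->notify_opaque);
        }
    }
}

/* Returns 0 if the flush completed immediately, 1 if it was deferred until
 * the last outstanding map is dropped. */
int dma_flush(DMAContext *dma, flush_notify_fn cb, void *opaque)
{
    if (dma->ref > 0) {
        dma->flush_pending = 1;
        dma->notify = cb;
        dma->notify_opaque = opaque;
        return 1;
    }
    if (cb) {
        cb(opaque);
    }
    return 0;
}
```

The TCE flush hypercall would pass hcall_complete-style state as the opaque,
so the guest vCPU only resumes once the last in-flight map has gone away.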

VT-d actually has a concept of an invalidation completion queue which delivers
interrupt-based notification of invalidation completion events.  The above
flush_notify would be the natural way to support this since in this case, there
is no VCPU event that's directly involved in the completion event.
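For the VT-d case, the same notifier would just raise an interrupt instead of
waking a halted vCPU.  Schematically (invented names; the real thing would
write the invalidation wait descriptor status and fire an MSI):

```c
/* Hypothetical VT-d-style completion: no vCPU to resume, so the flush
 * notifier records guest-visible status and injects an interrupt. */
typedef struct VTDInvState {
    int status_written;   /* invalidation wait descriptor status stored */
    int msi_raised;       /* completion interrupt delivered */
} VTDInvState;

void vtd_flush_notify(void *opaque)
{
    VTDInvState *s = opaque;
    s->status_written = 1;   /* would store the wait-descriptor status data */
    s->msi_raised = 1;       /* would be an msi_notify()-style call in real code */
}
```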

Regards,

Anthony Liguori