
[Qemu-devel] Re: [PATCH v3 0/2] Inter-VM shared memory PCI device


From: Cam Macdonell
Subject: [Qemu-devel] Re: [PATCH v3 0/2] Inter-VM shared memory PCI device
Date: Thu, 25 Mar 2010 12:17:56 -0600

On Thu, Mar 25, 2010 at 11:48 AM, Avi Kivity <address@hidden> wrote:
> On 03/25/2010 07:35 PM, Cam Macdonell wrote:
>>
>>> Ah, I see.  You adjusted for the different behaviours in the driver.
>>>
>>> Still I recommend dropping the status register: this allows single-msi
>>> and PIRQ to behave the same way.  Also it is racy: if two guests signal
>>> a third, they will overwrite each other's status.
>>>
>>
>> With shared PIRQ interrupts and no status register, how does a driver
>> know whether its device generated the interrupt?
>>
>
> Right, you need a status register.  Just don't add any more information,
> since MSI cannot carry any data.

Right.
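
(As an aside, here is a minimal sketch of what a guest-side handler
looks like under this scheme; the names ivshmem_irq_handler,
IVSHMEM_STATUS, and the read-to-clear behaviour are my illustrative
assumptions, not the actual driver:)

/* Sketch only: assumes the usual <linux/interrupt.h> and <linux/io.h>,
 * plus a driver-private struct ivshmem_dev with an ioremapped BAR. */
static irqreturn_t ivshmem_irq_handler(int irq, void *opaque)
{
        struct ivshmem_dev *dev = opaque;
        u32 status = readl(dev->regs + IVSHMEM_STATUS); /* read-to-clear */

        if (!status)
                return IRQ_NONE;   /* another device on the shared line */

        /* The race described above: two peers writing status back to
         * back merge into one value before we get to read it. */
        handle_peer_notification(dev, status);
        return IRQ_HANDLED;
}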

>
>>> Eventfd values are a counter, not a register.  A read() on the other side
>>> returns the sum of all write()s (or eventfd_signal()s).  In the context
>>> of
>>> irqfd it just means the number of interrupts we coalesced.
>>>
>>> Multivalue was considered at one time for a different need and rejected.
>>>  Really, to solve the race you need a queue, and that can only be done in
>>> the shared memory segment using locked instructions.
>>>
>>
>> I had a hunch it was probably considered.  That explains why irqfd
>> doesn't have a datamatch field.  I guess supporting multiple MSI
>> vectors with one doorbell per guest isn't possible if only 1 bit of
>> information can be communicated.
>>
>
> Actually you can have one doorbell supporting multiple vectors and guests,
> simply divide the data value into two bit fields, one for the vector and one
> for the guest.  A single write gets both values into the host, which can
> then use datamatch to trigger the correct eventfd (which is wired to an
> irqfd in another guest).

At 4 bits per guest, a single write is then limited to 8 guests (with
32-bit registers); we could go to 64-bit.
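
(To make the encoding concrete, a sketch of the split, assuming 4 bits
for the vector field; the 4-bit width and the helper name are my
assumptions, not anything in the patch:)

#include <stdint.h>

#define VECTOR_BITS 4
#define VECTOR_MASK ((1u << VECTOR_BITS) - 1)

/* Pack guest id and vector into one 32-bit doorbell write; the host
 * registers one ioeventfd per (guest, vector) pair with
 * datamatch == doorbell_value(guest, vector), each wired to an irqfd
 * in the target guest. */
static inline uint32_t doorbell_value(uint32_t guest, uint32_t vector)
{
        return (guest << VECTOR_BITS) | (vector & VECTOR_MASK);
}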

>
>> So, ioeventfd/irqfd restricts MSI to 1 vector between guests.  Should
>> multi-MSI even be supported then in the non-ioeventfd/irqfd case?
>> Otherwise ioeventfd/irqfd become more than an implementation detail.
>>
>
> I lost you.  Please re-explain.

An irqfd can only trigger a single vector in a guest, and right now I
only have one eventfd per guest, so ioeventfd/irqfd restricts the
current implementation to a single vector that a guest can trigger.
Without irqfd, eventfds can be used like registers: a guest writes the
number of the vector it wants to trigger, but as you point out that is
racy.

So, supporting multiple vectors via irqfd requires multiple eventfds
for each guest (one per vector): a total of (# of guests) x (# of
vectors) are required.  If we're limited to 8 or 16 guests that's not
too bad; since the server opens them all we're restricted by the
default 1024-fd limit, but that's a pretty high ceiling for this
purpose.
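
(A sketch of the server-side allocation, one eventfd per
(guest, vector) pair; MAX_GUESTS and MAX_VECTORS are illustrative
limits I picked, not values from the patch:)

#include <sys/eventfd.h>

#define MAX_GUESTS  16
#define MAX_VECTORS 4   /* 16 x 4 = 64 fds, well under the 1024 limit */

static int doorbells[MAX_GUESTS][MAX_VECTORS];

static int alloc_doorbells(void)
{
        int g, v;

        for (g = 0; g < MAX_GUESTS; g++)
                for (v = 0; v < MAX_VECTORS; v++) {
                        doorbells[g][v] = eventfd(0, 0);
                        if (doorbells[g][v] < 0)
                                return -1;  /* out of descriptors */
                }
        return 0;
}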

>
> --
> Do not meddle in the internals of kernels, for they are subtle and quick to
> panic.