From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH V14 2/7] Add TPM (frontend) hardware interface (TPM TIS) to Qemu
Date: Mon, 5 Mar 2012 00:59:02 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Fri, Mar 02, 2012 at 07:02:21AM -0500, Stefan Berger wrote:
> On 02/21/2012 06:08 PM, Michael S. Tsirkin wrote:
> >On Tue, Feb 21, 2012 at 05:30:32PM -0500, Stefan Berger wrote:
> >>On 02/21/2012 02:58 PM, Michael S. Tsirkin wrote:
> >>>On Tue, Feb 21, 2012 at 10:05:26AM -0500, Stefan Berger wrote:
> >>>>On 02/21/2012 07:18 AM, Michael S. Tsirkin wrote:
> >>>>When the backend delivers the response, it checks whether the
> >>>>interface is used in interrupt mode and raises the interrupt.
> >>>IMO it's the frontend that should send interrupts.
> >>>Yes it kind of works for isa anyway, but e.g. pci
> >>>needs to update configuration etc.
> >>>
> >>The code that causes the interrupt to be raised is in the frontend.
> >>The function doing that is invoked via callback from the backend.
> >>This should be ok?
> >Absolutely.
> >
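For illustration, that callback path could look roughly like this
(a minimal sketch; all identifiers are hypothetical, not the actual
patch code):

/* Backend -> frontend completion callback; only frontend code
 * decides whether to raise the interrupt. */
static void tpm_tis_response_cb(TPMState *s, uint8_t locty)
{
    qemu_mutex_lock(&s->state_lock);
    s->loc[locty].state = TPM_TIS_STATE_COMPLETION;
    /* Raise the IRQ only if the guest enabled interrupt mode. */
    if (s->loc[locty].inte & TPM_TIS_INT_DATA_AVAILABLE) {
        tpm_tis_raise_irq(s, locty, TPM_TIS_INT_DATA_AVAILABLE);
    }
    /* Wake the main thread, which may be waiting on suspend for
     * the last command to finish. */
    qemu_cond_signal(&s->cmd_complete);
    qemu_mutex_unlock(&s->state_lock);
}
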
> >>>>The
> >>>>backend enters the frontend code with a callback. This function
> >>>>also sends a signal that may wake up the main thread, which,
> >>>>during suspend, may be sleeping on a condition variable waiting
> >>>>for the last command to be processed.
> >>>>
> >>>>I now added a function to the backend interface that is invoked by
> >>>>the frontend to notify the backend of a TPM request. The backend
> >>>>code can then either notify a thread (passthrough and libtpms
> >>>>driver) or create a response right away and invoke that callback to
> >>>>the front-end to deliver the response (null driver). How
> >>>>notifications are handled is internal to the frontend and
> >>>>backend, with some backends (libtpms, passthrough) sharing
> >>>>that code.
> >>>>
> >>>>Stefan
> >>>Right. And all the locking/threading can then be internal to the backend.
> >>>
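Sketched with hypothetical names (not the patch's actual ones), that
notification interface could look like:

/* Frontend -> backend: notify the backend of a new TPM request. */
struct TPMDriverOps {
    void (*deliver_request)(TPMBackend *tb);
};

/* libtpms/passthrough: wake the worker thread (shared code). */
static void tpm_thread_deliver_request(TPMBackend *tb)
{
    qemu_cond_signal(&tb->cmd_cond);
}

/* null driver: no thread; build the response and call straight
 * back into the frontend. */
static void tpm_null_deliver_request(TPMBackend *tb)
{
    tpm_null_build_response(tb);
    tb->recv_data_callback(tb->tpm_state, tb->locty);
}
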
> >>Do you really want to replace code like this in the frontend:
> >>
> >>qemu_mutex_lock(&s->state_lock);
> >>[...]
> >>qemu_mutex_unlock(&s->state_lock);
> >>
> >>
> >>with
> >>
> >>
> >>s->be_driver->ops->state_lock(s->be_driver);
> >>[...]
> >>s->be_driver->ops->state_unlock(s->be_driver);
> >>
> >>
> >>where the backend starts protecting frontend data structures ?
> >It's ugly. I hope you can do something saner:
> >ops->send_command(....)
> >with command encapsulating the relevant info.
> >
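One way to read that suggestion (again a hypothetical sketch, not
actual patch code): pack everything the backend needs into a single
command structure, so neither side has to export its locks:

typedef struct TPMCommand {
    uint8_t  *in;       /* request buffer            */
    uint32_t  in_len;
    uint8_t  *out;      /* response buffer           */
    uint32_t  out_len;
    uint8_t   locty;    /* originating locality      */
} TPMCommand;

/* A single entry point; locking stays internal to whichever
 * side owns the data being touched. */
s->be_driver->ops->send_command(s->be_driver, &cmd);
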
> >>At the moment there are two backends that need threading: the
> >>libtpms and passthrough backends. Both will require locking of
> >>data structures that belong to the frontend. Only the null driver
> >>doesn't need a thread: the main thread can call into the backend,
> >>create the response, and call back into the frontend to deliver
> >>the response. If structures are protected via mutexes, then only
> >>the null driver (which we don't want anyway) may end up grabbing
> >>mutexes that aren't strictly necessary, while the two other
> >>backends need them. I don't see the mutexes as problematic: the
> >>frontend protects its data structures for the callbacks and other
> >>API calls it offers, so they are simply thread-safe.
> >>
> >>     Stefan
> >Worst case, you can take a qemu mutex. Is TPM
> >performance-sensitive enough to make contention on that
> >lock a problem?
> >
> 
> Instrumenting the code with qemu_mutex_trylock(), a counter for
> failed attempts, and a subsequent qemu_mutex_lock() shows
> practically no lock contention at all, whether the Linux driver
> uses polling or IRQ mode.
> 
> IRQ mode: 4 failed lock attempts out of 1726208 locks -> 0.00%
> polling mode: 2 failed lock attempts out of 1545216 locks -> 0.00%
> 
> I used the libtpms-based backend and ran IMA and a test suite
> in the VM.
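
The numbers above can be collected with instrumentation along these
lines (a sketch under assumed names; the counters are not part of
the patch):

static unsigned long locks_total, locks_contended;

static void tpm_state_lock(TPMState *s)
{
    locks_total++;
    /* qemu_mutex_trylock() returns non-zero if the lock is
     * already held; fall back to a blocking lock and count it. */
    if (qemu_mutex_trylock(&s->state_lock)) {
        locks_contended++;
        qemu_mutex_lock(&s->state_lock);
    }
}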
> 
> 
>     Stefan

so maybe you can just do qemu_mutex_lock_iothread(), similar
to what kvm does.
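
Roughly, the backend's worker thread would then bracket its
completion path with the global lock (hypothetical sketch):

/* backend worker thread, after producing a response */
qemu_mutex_lock_iothread();
tpm_tis_response_cb(s, locty);   /* touches frontend state, IRQs */
qemu_mutex_unlock_iothread();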

-- 
MST


