Re: [Qemu-devel] [PATCH V14 2/7] Add TPM (frontend) hardware interface (TPM TIS) to Qemu


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH V14 2/7] Add TPM (frontend) hardware interface (TPM TIS) to Qemu
Date: Tue, 21 Feb 2012 14:18:11 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Tue, Feb 21, 2012 at 06:19:26AM -0500, Stefan Berger wrote:
> On 02/20/2012 10:18 PM, Michael S. Tsirkin wrote:
> >On Mon, Feb 20, 2012 at 07:43:05PM -0500, Stefan Berger wrote:
> >>On 02/20/2012 05:02 PM, Michael S. Tsirkin wrote:
> >>>On Wed, Dec 14, 2011 at 08:43:17AM -0500, Stefan Berger wrote:
> >>>>+/*
> >>>>+ * Send a TPM request.
> >>>>+ * Call this with the state_lock held so we can sync with the receive
> >>>>+ * callback.
> >>>>+ */
> >>>>+static void tpm_tis_tpm_send(TPMState *s, uint8_t locty)
> >>>>+{
> >>>>+    TPMTISState *tis = &s->s.tis;
> >>>>+
> >>>>+    tpm_tis_show_buffer(&tis->loc[locty].w_buffer, "tpm_tis: To TPM");
> >>>>+
> >>>>+    s->command_locty = locty;
> >>>>+    s->cmd_locty     = &tis->loc[locty];
> >>>>+
> >>>>+    /* w_offset serves as the length indicator for the data;
> >>>>+       it's reset when the response comes back */
> >>>>+    tis->loc[locty].status = TPM_TIS_STATUS_EXECUTION;
> >>>>+    tis->loc[locty].sts &= ~TPM_TIS_STS_EXPECT;
> >>>>+
> >>>>+    s->to_tpm_execute = true;
> >>>>+    qemu_cond_signal(&s->to_tpm_cond);
> >>>>+}
> >>>What happens IIUC is that the frontend sets to_tpm_execute
> >>>and signals a condition, and the backend clears it
> >>>and waits on the condition.
> >>>
> >>>So how about moving all the signalling
> >>>and locking out to the backend, and having the frontend
> >>>invoke a callback to signal it?
> >>>
> >>>The whole threading thing then becomes a work-around
> >>>for a backend that does not support select(),
> >>>instead of spilling out into the frontend.
> >>>
> >>How do I get the lock calls (qemu_mutex_lock(&s->state_lock)) out of
> >>the frontend? Do you want me to add callbacks to the backend
> >>interface for locking (s->be_driver->ops->state_lock(s)) and
> >>unlocking (s->be_driver->ops->state_unlock(tpm_be)) the state
> >>that really belongs to the frontend (the state is 's'), invoke them as
> >>shown in parentheses, and still keep s->state_lock around? Ideally
> >>the locks would end up being no-ops if select() were available, but
> >>in the end every backend will need to support that lock.
> >>
> >>[The lock protects the common structure so that the thread in the
> >>backend can deliver the response to a request while the guest OS,
> >>for example, polls the hardware interface for its current state.]
> >>
> >>
> >>    Stefan
> >
> >Well, this is just an idea, please do not take this as
> >a request or anything like that. Maybe it is a dumb one.
> >
> >Maybe something like what you describe.
> 
> I am starting to wonder what we're trying to achieve here. We have a
> producer-consumer problem with different threads. Both threads
> need some locking constructs along with the signalling
> (condition); the backend has to be written in a certain way to
> work with the frontend, and locking and signalling are part of that. So
> I don't see that it makes much sense to move all that code around,
> especially since there is only one backend right now. Maybe
> something really great can be done once there is a 2nd backend.

There are three areas where I think the code
could be improved:

1. Your backend does not expose a reentrant asynchronous API,
   but another backend might.
   So it might be better to hide this detail and build a
   reentrant asynchronous API on top of what the OS supplies
   (see the sketch after this list).
2. Your backend looks into the frontend's data structures.
   This will make it impossible to implement another frontend.
3. I personally find it very hard to follow inter-thread
   communication based on shared memory and condition variables
   when it is spread across two different patches
   and different files. This could alternatively be addressed
   by documenting the synchronization/locking strategy.
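
Here is a minimal sketch of the kind of interface I mean for
points 1 and 2. All names below are made up for illustration, not
taken from your patches: the frontend only sees an opaque ops table,
hands over a request, and gets the response through a completion
callback, so whether the backend uses a worker thread or select()
stays hidden behind the backend boundary.

/* Illustrative only -- not the API of this patch series. */
typedef void (TPMRecvDataCB)(void *opaque,
                             const uint8_t *resp, size_t resp_len);

typedef struct TPMRequest {
    const uint8_t *buf;   /* command bytes written by the guest */
    size_t len;           /* length of the command */
    uint8_t locty;        /* locality the command arrived on */
} TPMRequest;

typedef struct TPMBackendOps {
    /*
     * Queue a request. The backend wakes its worker thread (or,
     * where the OS allows it, drives the device via select()) and
     * invokes 'cb' once the response is ready; all locking and
     * condition signalling stays behind this one call.
     */
    void (*deliver_request)(void *be, const TPMRequest *req,
                            TPMRecvDataCB *cb, void *cb_opaque);
} TPMBackendOps;

With something like this, tpm_tis_tpm_send() would shrink to filling
in a TPMRequest and calling deliver_request(), and the
state_lock/to_tpm_cond pair would move entirely into the backend.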


> >Alternatively, I imagined that you could pass a copy of,
> >or a pointer to, the necessary state to the backend,
> >which queues the command and wakes the worker.
> >In the reverse direction, the backend queues a response,
> >and when the OS polls you dequeue it and update the state.
> >
> 
> The OS doesn't necessarily need to poll. Polling is just one mode
> of operation of the OS; the other is interrupt-driven, where the
> backend raises the interrupt once it has delivered the response to
> the frontend.
> 
> 
>    Stefan

So you will also need to signal the frontend when it
must interrupt the guest. This is not a problem;
for example, you can use a qemu_eventfd object for this.
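
Roughly like this, assuming a POSIX host. Only qemu_set_fd_handler()
is a real QEMU call here; the fd pair could come from qemu_eventfd()
or a plain pipe(), and everything else is made up for illustration.
The backend worker writes to the fd when it has queued a response,
and the main loop raises the guest interrupt from the read handler,
so the frontend never runs in the worker thread's context.

#include <unistd.h>
#include <stdint.h>

static int resp_fds[2];   /* created once at init, e.g. with pipe() */

/* Runs in the backend worker thread after queueing the response. */
static void backend_notify_response(void)
{
    uint64_t one = 1;
    ssize_t n = write(resp_fds[1], &one, sizeof(one));
    (void)n;   /* a full pipe just means a kick is already pending */
}

/* Runs in the main loop; registered at init with
 * qemu_set_fd_handler(resp_fds[0], frontend_response_ready, NULL, s); */
static void frontend_response_ready(void *opaque)
{
    uint64_t cnt;

    if (read(resp_fds[0], &cnt, sizeof(cnt)) > 0) {
        /* dequeue the response, update the TIS registers, and
         * raise the IRQ if the guest enabled interrupts */
    }
}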

> 
> >Can this work?


