

From: Stefan Berger
Subject: Re: [Qemu-devel] [PATCH v3 00/13] tpm: Extend TPM with state migration support (not 2.11)
Date: Fri, 22 Dec 2017 14:18:32 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

On 12/22/2017 12:52 PM, Marc-André Lureau wrote:
Hi

On Fri, Dec 22, 2017 at 6:47 PM, Stefan Berger
<address@hidden> wrote:
On 12/22/2017 11:13 AM, Marc-André Lureau wrote:
Hi

On Fri, Dec 22, 2017 at 4:59 PM, Stefan Berger
<address@hidden> wrote:
On 12/22/2017 07:49 AM, Marc-André Lureau wrote:
Hi

On Fri, Nov 10, 2017 at 3:11 PM, Stefan Berger
<address@hidden> wrote:
This set of patches implements support for migrating the state of the
external 'swtpm' TPM emulator as well as that of the emulated device
interfaces. I have primarily tested this with the TIS and TPM 1.2 so
far, but it also seems to work with TPM 2.

The TIS is simplified first by reducing the number of buffers and read
and write offsets into these buffers. Following the state machine of the
TIS, a single buffer and r/w offset is enough for all localities, since
only one locality can ever be active.

This series applies on top of my tpm-next branch.

One of the challenges addressed by this set of patches is the fact
that the TPM emulator may be processing a command while the state
serialization of the devices is supposed to happen. A necessary first
step has been implemented here: it ensures that a response has been
received from the external emulator, and that the bottom half function,
which delivers the response and adjusts device registers (TIS or CRB),
has been executed, before the device's state is serialized.

A subsequent extension may need to address the live migration loop and
delay the serialization of devices until the response from the external
TPM has been received, though the likelihood that someone executes a
long-lasting TPM command while this is occurring is certainly low.

      Stefan

Stefan Berger (13):
     tpm_tis: convert uint32_t to size_t
     tpm_tis: limit size of buffer from backend
     tpm_tis: remove TPMSizeBuffer usage
     tpm_tis: move buffers from localities into common location
     tpm_tis: merge read and write buffer into single buffer
     tpm_tis: move r/w_offsets to TPMState
     tpm_tis: merge r/w_offset into rw_offset
     tpm: Implement tpm_sized_buffer_reset
ok for the above (you could queue/pull-req this now)

     tpm: Introduce condition to notify waiters of completed command
     tpm: Introduce condition in TPM backend for notification
     tpm: implement tpm_backend_wait_cmd_completed
     tpm: extend TPM emulator with state migration support
     tpm_tis: extend TPM TIS with state migration support
Much of the complexity of this migration series comes from the
handling & synchronization of the IO thread.

I think having a separate thread doesn't bring much to the TPM code
today; it is a workaround for the chardev API being mostly synchronous.
Am I wrong? (Yes, passthrough doesn't use chardev, but it should
probably use qio or chardev internally.)

Other kinds of devices using a separate process suffer from the same
problem. Just grep for qemu_chr_fe_write and look at the preceding
comments; it's a common flaw in the qemu code base: code uses the
synchronous API and only sometimes uses non-blocking writes
(hw/usb/redirect.c handles this rather better!)

I would like to get rid of the separate thread in TPM before we add
migration support. We should try to improve the chardev API to make it
easier to do non-blocking IO. This should considerably simplify the
above patches and benefit the rest of qemu (instead of having everyone
doing their own thing with a separate thread or some other kind of
continuation state).

What do you think?

I am wondering whether it will help us to get rid of the
conditions/notifications, like patches 9-11 of this series. I doubt 12
and 13 will change. At some point device state is supposed to be written
out, and in the case of the TPM we have to wait for the response to have
come back into the backend. We won't start listening on the file
descriptor for an outstanding response, so I guess we will still wait
for a notification in that case as well. So I am not sure which parts
are going to be simpler...
Why would qemu need to wait for completion of the emulator? Couldn't
qemu & the emulator save the current state, including in-flight
commands? That's apparently what usbredir does.

How could we make sure whether the PCRExtend has internally completed
the extension of the PCR when we haven't received the response yet, and
not just send the same command again? In general, the TPM cannot be
sent the same command twice, due to (several) commands
(cryptographically) altering the internal state of the TPM.

Why would it send the same command again? If the command is still
in flight on the emulator side, e.g.:

- send PCRExtend cmd
- send SaveState cmd
- receive SaveState reply

Then the received state should be able to restore the emulator so that
the PCRExtend reply is received after migration.

Wouldn't that work?

If the command has completed entirely, we can also just wait for the response to come back. Otherwise the command could be in the middle of processing, and what would we do then? Freeze the CPU state of the process that's executing the swtpm? No, I think we need to wait for the response to come back and then retrieve the state blobs of the TPM.




