Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM


From: Stefan Berger
Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
Date: Wed, 1 Mar 2017 10:40:13 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

On 03/01/2017 10:18 AM, Daniel P. Berrange wrote:
On Wed, Mar 01, 2017 at 08:25:43AM -0500, Stefan Berger wrote:
"Daniel P. Berrange" <address@hidden> wrote on 03/01/2017 07:54:14
AM:

From: "Daniel P. Berrange" <address@hidden>
To: Stefan Berger <address@hidden>
Cc: "Dr. David Alan Gilbert" <address@hidden>, Stefan Berger/
Watson/address@hidden, "address@hidden" <address@hidden>, "qemu-
address@hidden" <address@hidden>, "SERBAN, CRISTINA"
<address@hidden>, "Xu, Quan" <address@hidden>,
"address@hidden" <address@hidden>,
"address@hidden" <address@hidden>, "SHIH, CHING C"
<address@hidden>
Date: 03/01/2017 08:03 AM
Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE
TPM
On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert wrote:
* Stefan Berger (address@hidden) wrote:
On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
<snip>

So what was the multi-instance vTPM proxy driver patch set about?
That's for containers.
Why have the two mechanisms? Can you explain how the multi-instance proxy works; my brief reading when I saw your patch series seemed to suggest it could be used instead of CUSE for the non-container case.
One of the key things that was/is not appealing about this CUSE approach is that it basically invents a new ioctl() mechanism for talking to a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't need to have any changes at all - its existing driver for talking to TPM
We still need the control channel with the vTPM to reset it upon VM reset, for getting and setting the state of the vTPM upon snapshot/suspend/resume, changing locality, etc.
You ultimately need the same mechanisms if using the in-kernel vTPM with containers, as containers can support snapshot/suspend/resume, etc. too.
The vTPM running on the backend side of the vTPM proxy driver is essentially the same as the CUSE TPM used for QEMU. It has the same control channel through sockets. So on that level we would have support for the operations, but not integrated with anything that would support container migration.
This goes back to the question Dave mentions above: ignoring the control channel aspect temporarily, can the CUSE TPM support the exact same ioctl interface as the existing kernel TPM device? It feels like this should be possible, and if so, then this virtual TPM feature can be considered to have two separate pieces.

The existing kernel device has no ioctl interface. If it had one, it wouldn't be the same, since the control channel implemented on the ioctl interface is related to low-level commands such as resetting the device when the platform resets, etc.


First, enabling basic CUSE TPM device support would not require QEMU changes, as we could just use the existing tpm-passthrough driver against the CUSE device, albeit with the limitations around migration, snapshot, etc.

... and device reset upon VM reset. You want to have at least that, since otherwise the PCRs will not be in the correct state once the firmware with TPM support starts extending them. They need to be reset, and the only way to do that is through some control channel command.



Second, we could consider the question of supporting a control channel as a separate topic. IIUC, QEMU essentially needs a way to trigger various operations in the underlying TPM implementation when certain lifecycle operations are performed on the VM. I could see this being done as a simple network protocol over a UNIX socket. So, you could then add a new 'chardev' property to the tpm-passthrough device, which gives the ID of a character device that provides the control channel.

Why would that other control channel need to be a device rather than an ioctl on the device? Or maybe we access the emulated TPM entirely through UNIX I/O?



This way QEMU does not need to have any special code to deal with CUSE directly. QEMU could be used with a real TPM device, a vTPM device or a CUSE TPM device, with the same driver. With both the vTPM and the CUSE TPM device, QEMU would have the ability to use an out-of-band control channel when migration/snapshot/etc. take place.

This cleanly isolates QEMU from the particular design & implementation used by the current swtpm code.

Someone needs to define the control channel commands. My definition is here:

https://github.com/stefanberger/qemu-tpm/commit/27d6cd856d5a14061955df7a93ee490697a7a174#diff-5cc0e46d3ec33a3f4262db773c193dfe


This won't go away even if we changed the transport for the commands. ioctls seem to be one way of achieving this with a character device. The socket-based control channels of 'swtpm' use the same commands.

   Stefan


Regards,
Daniel




