Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
Date: Thu, 16 Jun 2016 18:54:34 +0100
User-agent: Mutt/1.6.1 (2016-04-27)

* Stefan Berger (address@hidden) wrote:
> On 06/16/2016 11:22 AM, Dr. David Alan Gilbert wrote:
> > * Stefan Berger (address@hidden) wrote:
> > > On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
> > > > * Stefan Berger (address@hidden) wrote:
> > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > <snip>
> > > > 
> > > > > > So what was the multi-instance vTPM proxy driver patch set about?
> > > > > That's for containers.
> > > > Why have the two mechanisms? Can you explain how the multi-instance
> > > > proxy works; my brief reading when I saw your patch series seemed
> > > > to suggest it could be used instead of CUSE for the non-container case.
> > > The multi-instance vtpm proxy driver works through an ioctl() on /dev/vtpmx
> > > that spawns a new front-end/back-end pair. The front-end is a new /dev/tpm%d
> > > device that can then be moved into the container (mknod + device cgroup
> > > setup). The back-end is an anonymous file descriptor that is passed to a
> > > TPM emulator, which reads the TPM requests coming in from that /dev/tpm%d
> > > and writes the responses back. Since it is implemented as a kernel driver,
> > > we can hook it into the Linux Integrity Measurement Architecture (IMA) and
> > > have IMA use it in place of a hardware TPM driver. There is ongoing work on
> > > namespacing support for IMA, aiming at an independent IMA instance per
> > > container, so that this can be used.
> > > 
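(As a concrete sketch of that spawning step: a minimal C example, assuming the
VTPM_PROXY_IOC_NEW_DEV ioctl and struct vtpm_proxy_new_dev as they appear in
the kernel's <linux/vtpm_proxy.h>; the patch set under discussion may differ
in detail, and error handling is abbreviated.)

    /* Minimal sketch: ask /dev/vtpmx for a new front-/back-end pair. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/vtpm_proxy.h>

    int main(void)
    {
        struct vtpm_proxy_new_dev new_dev = { .flags = 0 /* TPM 1.2 */ };
        int vtpmx = open("/dev/vtpmx", O_RDWR);

        if (vtpmx < 0 || ioctl(vtpmx, VTPM_PROXY_IOC_NEW_DEV, &new_dev) < 0) {
            perror("vtpmx");
            return 1;
        }
        /* new_dev.tpm_num names the front-end (/dev/tpm%d) to move into the
         * container; new_dev.fd is the anonymous back-end descriptor the TPM
         * emulator reads requests from and writes responses to. */
        printf("front-end /dev/tpm%u, back-end fd %u\n",
               new_dev.tpm_num, new_dev.fd);
        close(vtpmx);
        return 0;
    }
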
> > > A TPM has not only a data channel (/dev/tpm%d) but also a control channel,
> > > which is primarily implemented in its hardware interface and is typically
> > > not fully accessible to user space. The vtpm proxy driver supports _only_
> > > the data channel, through which it basically relays TPM commands and
> > > responses between user space and the TPM emulator. The control channel is
> > > provided by the software emulator through an additional TCP or UnixIO
> > > socket, or, in the case of CUSE, through ioctls. The control channel allows
> > > resetting the TPM when the container/VM is reset, setting the locality of a
> > > command, retrieving the state of the vTPM (for suspend) and setting it (for
> > > resume), among several other things. The commands for the control channel
> > > are defined here:
> > > 
> > > https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
> > > 
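(For the CUSE case, a control-channel operation is an ioctl on the emulator's
device node. A minimal sketch, assuming PTM_INIT and the ptm_init layout from
the tpm_ioctl.h linked above; the device path and error handling are
illustrative only.)

    /* Sketch only: reset/initialize a CUSE vTPM through its control ioctl. */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include "tpm_ioctl.h"   /* PTM_INIT, ptm_init, from the swtpm repo above */

    int vtpm_cuse_init(const char *dev /* e.g. "/dev/vtpm0" */)
    {
        ptm_init init;
        int fd = open(dev, O_RDWR);

        memset(&init, 0, sizeof(init));
        init.u.req.init_flags = 0;   /* or PTM_INIT_FLAG_DELETE_VOLATILE */
        if (fd < 0 || ioctl(fd, PTM_INIT, &init) < 0) {
            perror("PTM_INIT");
            return -1;
        }
        close(fd);
        return init.u.resp.tpm_result == 0 ? 0 : -1;
    }
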
> > > For a container we would require that its management stack initializes and
> > > resets the vTPM when the container is rebooted. (These are typically
> > > operations that are done through pulses on the motherboard.)
> > > 
> > > In the case of QEMU we would need more access to the control channel,
> > > including initialization and reset of the vTPM, getting and setting its
> > > state for suspend/resume/migration, setting the locality of commands, etc.,
> > > so that all low-level functionality is accessible to the emulator (QEMU).
> > > The proxy driver does not help with this, so we should use the swtpm
> > > implementation, which provides either the CUSE interface with its control
> > > channel (through ioctls) or UnixIO and TCP sockets for the control channel.
> > OK, that makes sense; does the control interface need to be handled by QEMU
> > or by libvirt or both?
> 
> The control interface needs to be handled primarily by QEMU.
> 
> In the case of the libvirt implementation, I am running an external program,
> swtpm_ioctl, that uses the control channel to gracefully shut down any
> existing TPM emulator whose device name happens to be the same as that of
> the TPM emulator that is about to be created. So it cleans up before starting
> a new TPM emulator, just to make sure that the new TPM instance can actually
> be started. Detail...
> 
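(Roughly what that swtpm_ioctl shutdown amounts to, sketched over a UnixIO
control socket. CMD_SHUTDOWN is taken from the tpm_ioctl.h linked earlier;
the framing, a 32-bit big-endian command code followed by a 32-bit big-endian
result, is an assumption to verify against swtpm.)

    /* Sketch only: gracefully shut down a running TPM emulator over its
     * UnixIO control channel. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include "tpm_ioctl.h"   /* CMD_SHUTDOWN, from the swtpm repo linked above */

    int vtpm_ctrl_shutdown(const char *sockpath)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        uint32_t cmd = htonl(CMD_SHUTDOWN), res;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        strncpy(addr.sun_path, sockpath, sizeof(addr.sun_path) - 1);
        if (fd < 0 ||
            connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            write(fd, &cmd, sizeof(cmd)) != sizeof(cmd) ||
            read(fd, &res, sizeof(res)) != sizeof(res)) {
            perror("control channel");
            return -1;
        }
        close(fd);
        return ntohl(res) == 0 ? 0 : -1;   /* 0 == TPM_SUCCESS */
    }
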
> > Either way, I think you're saying that with your kernel interface + a UnixIO
> > socket you can avoid the CUSE stuff?
> 
> So in the case of QEMU you don't need that new kernel device driver -- it's
> primarily meant for containers. For QEMU one would start the TPM emulator
> and make sure that QEMU has access to the data and control channels, which
> are now offered as
> 
> - CUSE interface with ioctl
> - TCP + TCP
> - UnixIO + TCP
> - TCP + UnixIO
> - UnixIO + UnixIO
> - file descriptors passed from invoker (see the sketch below)

OK, I'm trying to remember back; I'll admit to not having liked using CUSE,
but didn't using TCP/Unix/fd for the actual TPM side require a lot of code
to add a QEMU interface that wasn't ioctl?
Doesn't using the kernel driver give you the best of both worlds, i.e. the
non-control side in QEMU stays unchanged?

Dave

>   Stefan
> 
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


