
Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM


From: Stefan Berger
Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
Date: Mon, 13 Jun 2016 06:56:12 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.7.1

On 05/31/2016 03:10 PM, Dr. David Alan Gilbert wrote:
* BICKFORD, JEFFREY E (address@hidden) wrote:
* Daniel P. Berrange (address@hidden) wrote:
On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
"Daniel P. Berrange" <address@hidden> wrote on 01/20/2016 10:00:41
AM:


process at all - it would make sense if there was a single
swtpm_cuse shared across all QEMUs, but if there's one per
QEMU device, it feels like it'd be much simpler to just have
the functionality linked in QEMU.  That avoids the problem
I tried having it linked in QEMU before. It was basically rejected.
I remember an impl you did many years(?) ago now, but don't recall
the results of the discussion. Can you elaborate on why it was
rejected as an approach? It just doesn't make much sense to me
to have to create an external daemon, a CUSE device and a comms
protocol simply to be able to read/write a plain file containing
the TPM state. It's massive over-engineering IMHO, adding way
more complexity and thus scope for failure.
The TPM 1.2 implementation adds tens of thousands of lines of code, and the TPM 2
implementation is in the same range. The concern was about having this code right
in the QEMU address space: it's big, it can have bugs, and we don't want it
to harm QEMU. So we now put it into an external process implemented by the
swtpm project, which builds on libtpms to provide TPM 1.2 functionality
(to be extended with TPM 2). We cannot call the libtpms APIs directly
anymore, so we need a control channel, which is implemented through ioctls
on the CUSE device.
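For illustration, here is a minimal sketch of what such an ioctl-based control channel could look like from the client side. The device path, the struct layout and the PTM_CUSE_INIT request number are hypothetical placeholders, not the actual swtpm definitions:

/* Illustrative sketch only: TPM commands travel over read()/write() on
 * the CUSE device, while out-of-band control uses ioctl().  The struct
 * layout and request number below are hypothetical placeholders. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

struct ptm_init {
    uint32_t init_flags;   /* e.g. ask for volatile state to be cleared */
    uint32_t tpm_result;   /* result code filled in by the emulator */
};

/* Hypothetical control request on a private magic byte */
#define PTM_CUSE_INIT _IOWR('P', 1, struct ptm_init)

int vtpm_open_and_init(const char *dev_path)
{
    struct ptm_init init = { .init_flags = 0 };
    int fd = open(dev_path, O_RDWR);

    if (fd < 0) {
        perror("open CUSE TPM device");
        return -1;
    }
    /* Command/response pass-through uses plain write()/read() on this fd;
     * control operations (init, reset, state save/restore) use ioctl(). */
    if (ioctl(fd, PTM_CUSE_INIT, &init) < 0 || init.tpm_result != 0) {
        perror("PTM_CUSE_INIT");
        close(fd);
        return -1;
    }
    return fd;   /* keep the fd open for TPM command pass-through */
}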
Ok, the security separation concern does make some sense. The use of CUSE
still seems fairly questionable to me. CUSE makes sense if you want to
provide a drop-in replacement for the kernel TPM device driver, which
would avoid the need for a new QEMU backend. If you're not emulating an existing
kernel driver ABI, though, CUSE + ioctl feels like a really awful RPC
transport between two userspace processes.
While I don't really like CUSE, I can see some of the reasoning here.
By providing the existing TPM ioctl interface I think it means you can use
existing host-side TPM tools to initialise/query the soft-tpm, and those
should be independent of the soft-tpm implementation.
As for the extra interfaces you need to set it up because it's a soft-tpm,
once you've already got that ioctl interface as above, it seems to make
sense to extend it with the extra interfaces needed (see the sketch below). The only
things you have to watch for there are that the extra interfaces don't clash
with any future kernel ioctl extensions, and that the interface defined
is generic enough for different soft-tpm implementations.
Dave
Dr. David Alan Gilbert / address@hidden / Manchester, UK
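One way to keep such extensions out of the way of future kernel ioctls, sketched below with made-up names and numbers rather than the real swtpm header, is to group all emulator-specific controls under their own ioctl magic:

/* Sketch of the kind of namespacing described above: emulator-specific
 * controls grouped under a private ioctl magic so they are unlikely to
 * collide with request numbers a kernel driver might claim later.
 * All names and numbers here are illustrative only. */
#include <linux/ioctl.h>
#include <stdint.h>

#define VTPM_IOC_MAGIC  0xE1    /* private magic byte for emulator-only controls */

struct vtpm_state_blob {
    uint32_t type;      /* which blob: permanent, volatile or save-state */
    uint32_t length;    /* filled in by the emulator */
};

#define VTPM_GET_CAPABILITY  _IOR(VTPM_IOC_MAGIC, 0, uint64_t)
#define VTPM_RESET           _IO(VTPM_IOC_MAGIC, 1)
#define VTPM_GET_STATEBLOB   _IOWR(VTPM_IOC_MAGIC, 2, struct vtpm_state_blob)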

Over the past several months, AT&T Security Research has been testing the 
Virtual TPM software from IBM on the Power (ppc64) platform. Based on our testing 
results, the vTPM software works well and as expected. Support for libvirt and the 
CUSE TPM allows us to create VMs with the vTPM functionality and was tested in a 
full-fledged OpenStack environment.
We believe the vTPM functionality will improve various aspects of VM security in our enterprise-grade cloud environment. AT&T would like to see these patches accepted by the QEMU community into the default, standard build so this technology can be easily adopted in various open source cloud deployments.
Interesting; however, I see Stefan has been contributing other kernel
patches that create a different vTPM setup without the use of CUSE;
if that's the case then I guess that's the preferable solution.

That solution is for Linux containers. It doesn't have the control channel we need for virtual machines, where for example a reset is sent to the vTPM by QEMU when rebooting the VM. Instead, we assume that the container management stack would reset the vTPM upon container restart.
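To make the reboot case concrete, a QEMU backend could register a machine reset handler that forwards the reset over the control channel. The sketch below reuses the hypothetical VTPM_RESET ioctl from the earlier sketch and only approximates the backend wiring; it is not the actual QEMU TPM backend code:

/* Sketch: forward a VM reset to the external vTPM over the control fd.
 * VTPM_RESET is hypothetical (see earlier sketch); qemu_register_reset()
 * is QEMU's machine-reset hook, whose header location varies between
 * QEMU versions. */
#include <stdio.h>
#include <sys/ioctl.h>

#include "hw/hw.h"              /* declares qemu_register_reset() in QEMU of this era */

#define VTPM_RESET _IO(0xE1, 1) /* hypothetical request from the earlier sketch */

typedef struct {
    int ctrl_fd;                /* fd of the CUSE TPM device */
} VTPMBackend;

static void vtpm_machine_reset(void *opaque)
{
    VTPMBackend *be = opaque;

    /* The guest is rebooting: tell the emulator to reset its TPM state,
     * much as a hardware TPM would be reset by the platform. */
    if (ioctl(be->ctrl_fd, VTPM_RESET) < 0) {
        perror("VTPM_RESET");
    }
}

static void vtpm_backend_register_reset(VTPMBackend *be)
{
    /* Register once when the backend is created. */
    qemu_register_reset(vtpm_machine_reset, be);
}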


Jeffrey: Can you give a bit more detail about your setup, and how
you're managing the life cycle of the vTPM data?

Dave

Regards,
Jeffrey Bickford
AT&T Security Research Center
address@hidden
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK




