
Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
Date: Wed, 15 Jun 2016 20:30:20 +0100
User-agent: Mutt/1.6.1 (2016-04-27)

* Stefan Berger (address@hidden) wrote:
> On 05/31/2016 09:58 PM, Xu, Quan wrote:
> > On Wednesday, June 01, 2016 2:59 AM, BICKFORD, JEFFREY E <address@hidden> 
> > wrote:
> > > > * Daniel P. Berrange (address@hidden) wrote:
> > > > > On Wed, Jan 20, 2016 at 10:54:47AM -0500, Stefan Berger wrote:
> > > > > > On 01/20/2016 10:46 AM, Daniel P. Berrange wrote:
> > > > > > > On Wed, Jan 20, 2016 at 10:31:56AM -0500, Stefan Berger wrote:
> > > > > > > > "Daniel P. Berrange" <address@hidden> wrote on 01/20/2016
> > > > > > > > 10:00:41
> > > > > > > > AM:
> > > > > > > > 
> > > > > > > > 
> > > > > > > > > process at all - it would make sense if there was a single
> > > > > > > > > swtpm_cuse shared across all QEMUs, but if there's one per
> > > > > > > > > QEMU device, it feels like it'd be much simpler to just have
> > > > > > > > > the functionality linked in QEMU.  That avoids the problem
> > > > > > > > I tried having it linked in QEMU before. It was basically 
> > > > > > > > rejected.
> > > > > > > I remember an impl you did many years(?) ago now, but don't
> > > > > > > recall the results of the discussion. Can you elaborate on why it
> > > > > > > was rejected as an approach? It just doesn't make much sense to
> > > > > > > me to have to create an external daemon, a CUSE device and comms
> > > > > > > protocol, simply to be able to read/write a plain file containing
> > > > > > > the TPM state. It's massive over-engineering IMHO, adding way
> > > > > > > more complexity and thus scope for failure.
> > > > > > The TPM 1.2 implementation adds tens of thousands of lines of code.
> > > > > > The TPM 2 implementation is in the same range. The concern was
> > > > > > having this code right in the QEMU address space. It's big, it can
> > > > > > have bugs, so we don't want it to harm QEMU. So we now put this
> > > > > > into an external process implemented by the swtpm project that
> > > > > > builds on libtpms, which provides TPM 1.2 functionality (to be
> > > > > > extended with TPM 2). We cannot call APIs of libtpms directly
> > > > > > anymore, so we need a control channel, which is implemented through
> > > > > > ioctls on the CUSE device.
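
To make the split concrete: control traffic goes over ioctl() on the CUSE
character device that swtpm exposes, while TPM command/response data goes over
plain read()/write() on the same descriptor. Below is a minimal client-side
sketch of that shape; the device path, request code and payload type are
illustrative assumptions, not the actual definitions from swtpm's control
interface.

    /* Sketch only: shows the shape of a control-channel call to a CUSE TPM
     * device.  The request code and payload below are illustrative
     * placeholders, not the real definitions from swtpm's tpm_ioctl.h. */
    #include <stdio.h>
    #include <inttypes.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/ioctl.h>

    /* Hypothetical control request: query capabilities, returning a bitmask. */
    #define VTPM_GET_CAPS  _IOR('P', 0, uint64_t)

    int main(void)
    {
        uint64_t caps = 0;
        int fd = open("/dev/vtpm0", O_RDWR);   /* CUSE device created by swtpm */

        if (fd < 0) {
            perror("open /dev/vtpm0");
            return 1;
        }
        /* Control commands travel over ioctl(); TPM command/response data
         * would travel over read()/write() on the same fd. */
        if (ioctl(fd, VTPM_GET_CAPS, &caps) < 0) {
            perror("ioctl");
            close(fd);
            return 1;
        }
        printf("control channel capabilities: 0x%" PRIx64 "\n", caps);
        close(fd);
        return 0;
    }
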
> > > > > Ok, the security separation concern does make some sense. The use of
> > > > > CUSE still seems fairly questionable to me. CUSE makes sense if you
> > > > > want to provide a drop-in replacement for the kernel TPM device
> > > > > driver, which would avoid the need for a new QEMU backend. If you're
> > > > > not emulating an existing kernel driver ABI though, CUSE + ioctl
> > > > > feels like a really awful RPC transport between two userspace processes.
> > > > While I don't really like CUSE, I can see some of the reasoning here.
> > > > By providing the existing TPM ioctl interface, I think it means you can
> > > > use existing host-side TPM tools to initialise/query the soft-tpm, and
> > > > those should be independent of the soft-tpm implementation.
> > > > As for the extra interfaces you need to set up a soft-tpm, once you've
> > > > already got that ioctl interface as above, it seems to make sense to
> > > > extend it to add the extra interfaces needed.
> > > > The only things you have to watch for there are that the extra
> > > > interfaces don't clash with any future kernel ioctl extensions, and
> > > > that the interface defined is generic enough for different soft-tpm
> > > > implementations.
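
One conventional way to satisfy both constraints is to carve out a private
ioctl number space with its own magic byte and keep the payloads
implementation-neutral. The sketch below illustrates the idea; every name and
number in it is hypothetical and not taken from swtpm or the kernel.

    /* Sketch: numbering soft-TPM control ioctls so they stay clear of anything
     * a future kernel TPM driver might add, while keeping the payloads generic
     * across soft-TPM implementations.  All names here are hypothetical. */
    #include <stdio.h>
    #include <stdint.h>
    #include <linux/ioctl.h>

    #define VTPM_IOC_MAGIC 'P'                /* assumed private magic byte */

    struct vtpm_locality {                    /* implementation-neutral payload */
        uint8_t locality;
    };

    /* Start the command numbers at 0x80 so a hypothetical kernel range
     * starting at 0 cannot collide with them. */
    #define VTPM_IOC_INIT          _IO(VTPM_IOC_MAGIC, 0x80)
    #define VTPM_IOC_SET_LOCALITY  _IOW(VTPM_IOC_MAGIC, 0x81, struct vtpm_locality)
    #define VTPM_IOC_GET_STATEBLOB _IOR(VTPM_IOC_MAGIC, 0x82, uint32_t)

    int main(void)
    {
        /* The _IOx macros pack direction, size, magic and command number into
         * one 32-bit request value; printing them shows the encoding. */
        printf("VTPM_IOC_INIT          = 0x%08lx\n", (unsigned long)VTPM_IOC_INIT);
        printf("VTPM_IOC_SET_LOCALITY  = 0x%08lx\n", (unsigned long)VTPM_IOC_SET_LOCALITY);
        printf("VTPM_IOC_GET_STATEBLOB = 0x%08lx\n", (unsigned long)VTPM_IOC_GET_STATEBLOB);
        return 0;
    }
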
> > > 
> > > > Dave
> > > > Dr. David Alan Gilbert / address@hidden / Manchester, UK
> > > 
> > > Over the past several months, AT&T Security Research has been testing the
> > > Virtual TPM software from IBM on the Power (ppc64) platform.
> > What about the x86 platform?
> > 
> > > Based on our
> > > testing results, the vTPM software works well and behaves as expected.
> > > Support for libvirt and the CUSE TPM allows us to create VMs with vTPM
> > > functionality, and this was tested in a full-fledged OpenStack environment.
> > > 
> > Cool..
> > 
> > > We believe the vTPM functionality will improve various aspects of VM
> > > security in our enterprise-grade cloud environment. AT&T would like to
> > > see these patches accepted by the QEMU community into the default build
> > > so this technology can be easily adopted in various open source cloud
> > > deployments.
> > Stefan: could you give an update on the status of this patch set? I'd really
> > appreciate your patch.
> 
> What do you mean by 'update status'? It's pretty much still the same as
> before.
> 
> https://github.com/stefanberger/qemu-tpm/tree/v2.6.0+tpm
> 
> 
> The swtpm implementation that I connect QEMU to now has more interface
> choices. There's the existing CUSE + ioctl for the data and control
> channels, or any combination of TCP and Unix sockets for the data and
> control channels. The libvirt-based management stack I built on top of QEMU
> with vTPM assumes QEMU is using the CUSE interface.
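
As a rough illustration of the socket variant of the control channel, the
sketch below connects to a Unix-domain control socket and sends one framed
command. The socket path, command number and framing are placeholders, not the
documented swtpm control protocol.

    /* Sketch: driving a soft-TPM control channel over a Unix socket instead of
     * CUSE ioctls.  Path, command number and framing are placeholders. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        uint32_t cmd = htonl(1);          /* hypothetical "get capability" command */
        uint32_t resp[2];
        ssize_t n;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        strncpy(addr.sun_path, "/tmp/swtpm-ctrl.sock", sizeof(addr.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }
        /* Only the control channel is shown; the TPM data channel would be a
         * second socket (or the CUSE device) carrying raw TPM commands. */
        if (write(fd, &cmd, sizeof(cmd)) != (ssize_t)sizeof(cmd)) {
            perror("write");
            close(fd);
            return 1;
        }
        n = read(fd, resp, sizeof(resp));
        printf("control channel returned %zd bytes\n", n);
        close(fd);
        return 0;
    }
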

So what was the multi-instance vTPM proxy driver patch set about?

Dave

> 
>     Stefan
> 
> 
> > 
> > -Quan
> > 
> 
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


