Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM


From: Stefan Berger
Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
Date: Wed, 1 Mar 2017 07:32:52 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

On 02/28/2017 01:31 PM, Marc-André Lureau wrote:
Hi

On Fri, Jun 17, 2016 at 1:29 AM Stefan Berger <address@hidden> wrote:

    On 06/16/2016 03:24 PM, Dr. David Alan Gilbert wrote:
    > * Stefan Berger (address@hidden) wrote:
    >> On 06/16/2016 01:54 PM, Dr. David Alan Gilbert wrote:
    >>> * Stefan Berger (address@hidden) wrote:
    >>>> On 06/16/2016 11:22 AM, Dr. David Alan Gilbert wrote:
    >>>>> * Stefan Berger (address@hidden) wrote:
    >>>>>> On 06/16/2016 04:05 AM, Dr. David Alan Gilbert wrote:
    >>>>>>> * Stefan Berger (address@hidden) wrote:
    >>>>>>>> On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
    >>>>>>> <snip>
    >>>>>>>
    >>>>>>>>> So what was the multi-instance vTPM proxy driver patch set about?
    >>>>>>>> That's for containers.
    >>>>>>> Why have the two mechanisms? Can you explain how the multi-instance
    >>>>>>> proxy works; my brief reading when I saw your patch series seemed
    >>>>>>> to suggest it could be used instead of CUSE for the non-container case.
    >>>>>> The multi-instance vtpm proxy driver basically works through usage
    >>>>>> of an ioctl() on /dev/vtpmx that is used to spawn a new front- and
    >>>>>> backend pair. The front-end is a new /dev/tpm%d device that then can
    >>>>>> be moved into the container (mknod + device cgroup setup). The
    >>>>>> backend is an anonymous file descriptor that is to be passed to a
    >>>>>> TPM emulator for reading TPM requests coming in from that /dev/tpm%d
    >>>>>> and returning responses to. Since it is implemented as a kernel
    >>>>>> driver, we can hook it into the Linux Integrity Measurement
    >>>>>> Architecture (IMA) and have it be used by IMA in place of a hardware
    >>>>>> TPM driver. There's ongoing work in the area of namespacing support
    >>>>>> for IMA to have an independent IMA instance per container so that
    >>>>>> this can be used.
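
For illustration, here is a minimal sketch of what a management stack might do against /dev/vtpmx. It assumes the vtpm proxy driver's UAPI header <linux/vtpm_proxy.h>, with struct vtpm_proxy_new_dev and the VTPM_PROXY_IOC_NEW_DEV ioctl; exact fields and flags depend on the kernel version, so treat this as a sketch rather than authoritative usage:

/* Sketch: create a vtpm proxy front-/backend pair via /dev/vtpmx.
 * Assumes <linux/vtpm_proxy.h> (struct vtpm_proxy_new_dev,
 * VTPM_PROXY_IOC_NEW_DEV); typically needs root. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/vtpm_proxy.h>

int main(void)
{
    struct vtpm_proxy_new_dev new_dev = { .flags = 0 };  /* 0 = TPM 1.2 */

    int vtpmx = open("/dev/vtpmx", O_RDWR);
    if (vtpmx < 0) {
        perror("open /dev/vtpmx");
        return 1;
    }
    if (ioctl(vtpmx, VTPM_PROXY_IOC_NEW_DEV, &new_dev) < 0) {
        perror("VTPM_PROXY_IOC_NEW_DEV");
        return 1;
    }

    /* Front-end: /dev/tpm<tpm_num>, to be moved into the container
     * (mknod with major/minor + device cgroup setup).
     * Back-end: new_dev.fd, an anonymous fd to hand to the TPM emulator,
     * which reads TPM requests from it and writes responses back. */
    printf("front-end /dev/tpm%u (major %u, minor %u), back-end fd %d\n",
           new_dev.tpm_num, new_dev.major, new_dev.minor, (int)new_dev.fd);

    close(vtpmx);
    return 0;
}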
    >>>>>>
    >>>>>> A TPM does not only have a data channel (/dev/tpm%d) but also a
    >>>>>> control channel, which is primarily implemented in its hardware
    >>>>>> interface and is typically not fully accessible to user space. The
    >>>>>> vtpm proxy driver _only_ supports the data channel through which it
    >>>>>> basically relays TPM commands and responses from user space to the
    >>>>>> TPM emulator. The control channel is provided by the software
    >>>>>> emulator through an additional TCP or UnixIO socket, or in case of
    >>>>>> CUSE through ioctls. The control channel allows resetting the TPM
    >>>>>> when the container/VM is being reset, setting the locality of a
    >>>>>> command, retrieving the state of the vTPM (for suspend) and setting
    >>>>>> the state of the vTPM (for resume), among several other things. The
    >>>>>> commands for the control channel are defined here:
    >>>>>>
    >>>>>> https://github.com/stefanberger/swtpm/blob/master/include/swtpm/tpm_ioctl.h
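
As a rough illustration of the socket variant of that control channel, the sketch below connects to a UnixIO control socket and sends one command. The socket path is hypothetical, and the command value and wire format (4-byte big-endian command code followed by a response) are assumptions on my part; the authoritative definitions are in the tpm_ioctl.h linked above:

/* Sketch: send one command over a swtpm-style UnixIO control channel.
 * Path, command value and framing are assumptions; check tpm_ioctl.h. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/un.h>

#define CMD_GET_CAPABILITY 1   /* assumed value; verify against tpm_ioctl.h */

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    /* hypothetical control socket path */
    strncpy(addr.sun_path, "/tmp/mytpm-ctrl.sock", sizeof(addr.sun_path) - 1);

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("control socket");
        return 1;
    }

    uint32_t cmd = htonl(CMD_GET_CAPABILITY);  /* commands are big-endian */
    unsigned char resp[64];
    ssize_t n;
    if (write(fd, &cmd, sizeof(cmd)) != sizeof(cmd) ||
        (n = read(fd, resp, sizeof(resp))) < 0) {
        perror("control channel I/O");
        return 1;
    }
    printf("received %zd response bytes\n", n);

    close(fd);
    return 0;
}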
    >>>>>>
    >>>>>> For a container we would require that its management stack
    >>>>>> initializes and resets the vTPM when the container is rebooted.
    >>>>>> (These are typically operations that are done through pulses on the
    >>>>>> motherboard.)
    >>>>>>
    >>>>>> In case of QEMU we would need to have more access to the control
    >>>>>> channel, which includes initialization and reset of the vTPM,
    >>>>>> getting and setting its state for suspend/resume/migration, setting
    >>>>>> the locality of commands, etc., so that all low-level functionality
    >>>>>> is accessible to the emulator (QEMU). The proxy driver does not help
    >>>>>> with this, but we should use the swtpm implementation that either
    >>>>>> has that CUSE interface with control channel (through ioctls) or
    >>>>>> provides UnixIO and TCP sockets for the control channel.
    >>>>> OK, that makes sense; does the control interface need to be handled
    >>>>> by QEMU or by libvirt or both?
    >>>> The control interface needs to be handled primarily by QEMU.
    >>>>
    >>>> In case of the libvirt implementation I am running an external
    >>>> program, swtpm_ioctl, that uses the control channel to gracefully
    >>>> shut down any existing running TPM emulator whose device name happens
    >>>> to have the same name as the device of the TPM emulator that is to be
    >>>> created. So it cleans up before starting a new TPM emulator just to
    >>>> make sure that that new TPM instance can be started. Detail...
    >>>>
    >>>>> Either way, I think you're saying that with your kernel interface +
    >>>>> a UnixIO socket you can avoid the CUSE stuff?
    >>>> So in case of QEMU you don't need that new kernel device driver --
    >>>> it's primarily meant for containers. For QEMU one would start the TPM
    >>>> emulator and make sure that QEMU has access to the data and control
    >>>> channels, which are now offered as
    >>>>
    >>>> - CUSE interface with ioctl
    >>>> - TCP + TCP
    >>>> - UnixIO + TCP
    >>>> - TCP + UnixIO
    >>>> - UnixIO + UnixIO
    >>>> - file descriptors passed from invoker
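
To make the data-channel side of these combinations concrete, here is a small sketch that connects to a hypothetical UnixIO data socket and sends a raw TPM 1.2 TPM_Startup(ST_CLEAR) command. The socket path, and the assumption that the data channel carries nothing but raw TPM request/response buffers, are mine:

/* Sketch: send one raw TPM 1.2 command over a UnixIO data channel.
 * Assumes the data channel carries raw TPM request/response buffers;
 * the socket path is hypothetical. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    /* TPM_Startup(TPM_ST_CLEAR): tag 0x00C1, length 12, ordinal 0x99, 0x0001 */
    static const unsigned char tpm_startup[] = {
        0x00, 0xC1, 0x00, 0x00, 0x00, 0x0C,
        0x00, 0x00, 0x00, 0x99, 0x00, 0x01
    };
    unsigned char resp[4096];
    ssize_t n;

    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/tmp/mytpm-data.sock", sizeof(addr.sun_path) - 1);

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("data socket");
        return 1;
    }
    if (write(fd, tpm_startup, sizeof(tpm_startup)) != sizeof(tpm_startup) ||
        (n = read(fd, resp, sizeof(resp))) < 6) {
        perror("data channel I/O");
        return 1;
    }
    /* First bytes of the response hold the tag and total length (big-endian). */
    printf("response: %zd bytes, tag %02x%02x\n", n, resp[0], resp[1]);

    close(fd);
    return 0;
}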
    >>> OK, I'm trying to remember back; I'll admit to not having liked using
    >>> CUSE, but didn't using TCP/Unix/fd for the actual TPM side require a
    >>> lot of code to add a qemu interface that wasn't ioctl?
    >> Adding these additional interfaces to the TPM was a bigger effort, yes.
    > Right, so that code isn't in upstream qemu, is it?

    I was talking about the TPM emulator side that has been extended like
    this, not QEMU.


Out of curiosity, did you do it (adding socket/fd channel) for qemu or for other reasons?

    >
    >>> Doesn't using the kernel driver give you the benefit of both worlds,
    >>> i.e. the non-control side in QEMU is unchanged.
    >> Yes. I am not sure what you are asking, though. A control channel is
    >> necessary no matter what. The kernel driver talks to /dev/vtpm-<VM uuid>
    >> via a file descriptor and uses commands sent through ioctl for the
    >> control channel. Whether QEMU now uses an fd that is a UnixIO or TCP
    >> socket to send the commands to the TPM or an fd that uses CUSE doesn't
    >> matter much on the side of QEMU. The control channel may be a bit
    >> different when using ioctl versus an fd (for UnixIO or TCP). I am not
    >> sure why we would send commands through that vTPM proxy driver in case
    >> of QEMU rather than talking to the TPM emulator directly.
    > Right, so what I'm thinking is:
    >     a) QEMU talks to /dev/vtpm-whatever for the normal TPM stuff;
    >        no/little code needs to be added to qemu upstream for that

    If we talk to /dev/vtpm-whatever, then in my book we would talk to a
    CUSE TPM device. We have compatibility for that via fd passing from
    libvirt.


/dev/vtpmx-created devices are not CUSE devices, are they?

Could you explain why containers use the TPM proxy driver to create software TPMs, and not CUSE? Perhaps that will clear up some aspects. I imagine the kernel can provide some data from the TPM proxy driver, via /sys, or even use some functions (random, etc.)? A CUSE driver is opaque to the host kernel, right?

The TPM proxy driver hooks into the existing Linux TPM driver core and with that makes it available to other kernel services, such as trusted and encrypted keys and possibly a namespaced IMA, where the container would run its own instance of IMA that can then extend the PCRs of the emulated TPM (vTPM). For QEMU it's sufficient to make an emulated TPM available.



I understand a simulated hw TPM needs the additional control channel (the ioctl stuff), so it can't use the TPM proxy, as that wouldn't give you the extra channel. But containers could eventually use CUSE-created devices (if they didn't need the extra /sys or other interfaces), right?

For containers I think we would want to make more kernel services available to each container and for that we need a driver that hooks itself into the core TPM code and makes a 'chip' available.

http://lxr.free-electrons.com/source/drivers/char/tpm/tpm-chip.c#L88
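
To illustrate roughly what "makes a 'chip' available" means at that spot in the TPM core, here is a hedged kernel-side sketch. The names (struct tpm_class_ops, tpmm_chip_alloc, tpm_chip_register) come from drivers/char/tpm of that era, but signatures vary between kernel versions and a real driver supplies more callbacks, so this is illustrative only:

/* Sketch: a driver hooking into the TPM core to expose a 'chip'.
 * A real driver also provides .status, .req_complete_mask/_val,
 * .req_canceled and tears the chip down again on removal. */
#include <linux/module.h>
#include <linux/tpm.h>

static int my_tpm_send(struct tpm_chip *chip, u8 *buf, size_t len)
{
    /* relay the TPM request in 'buf' to the emulator backend */
    return 0;
}

static int my_tpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
{
    /* copy the emulator's response into 'buf' and return its length */
    return 0;
}

static const struct tpm_class_ops my_tpm_ops = {
    .send = my_tpm_send,
    .recv = my_tpm_recv,
};

static int my_tpm_probe(struct device *dev)
{
    struct tpm_chip *chip;

    chip = tpmm_chip_alloc(dev, &my_tpm_ops);  /* devm-managed allocation */
    if (IS_ERR(chip))
        return PTR_ERR(chip);

    /* Registers /dev/tpm%d and exposes the chip to in-kernel users
     * such as IMA and trusted/encrypted keys. */
    return tpm_chip_register(chip);
}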


    Stefan



    >     b) Then you talk to the control side via an fd/socket
    >        you need to add your existing code for that.

    Not sure what /dev/vtpm-whatever is. If you mean the vtpm proxy driver
    by it then I don't understand why we would need that dependency along
    with the complication of how the setup for this particular device needs
    to be done (run ioctl on /dev/vtpmx to get a front-end device and
    backend device file descriptor which then has to be passed to the swtpm
    to read from and write to).


I think we would like to see it as simple as containers, but they require different levels of operations. If all of the emulation were in qemu there would be no need for a control channel, so the control interface depends on what qemu and the tpm emulation process do. None of it is required for swtpm & containers, but hw emulation needs more.

It looks like the TPM kernel interface is only data read/write, and the CUSE ioctls are only for control IPC. If so then I think it's simpler, and more portable, to go with a pure socket/fd based solution, since CUSE in this qemu case doesn't bring much benefit afaict.

Btw, is there a need to synchronize the data & control channels? (Asking because it's not obvious when you say you can have both channels using different transports.)


    >
    > So that doesn't depend on CUSE, it doesn't depend on your particular

    If it doesn't depend on CUSE, it depends on a rather novel device driver
    that doesn't need to be used in the QEMU case.


    > vTPM implementation (except for the control socket data, but then
    > hopefully that's pretty abstract); all good?
    Not sure I followed you above.


Hopefully I didn't add more confusion :)
Thanks

        Stefan

    >
    > Dave
    >
    >>    Stefan
    >>
    >>> Dave
    >>>
    >>>>     Stefan
    >>>>
    >>> --
    >>> Dr. David Alan Gilbert / address@hidden / Manchester, UK
    >>>
    > --
    > Dr. David Alan Gilbert / address@hidden / Manchester, UK
    >


--
Marc-André Lureau



