
Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM


From: Stefan Berger
Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
Date: Wed, 1 Mar 2017 10:58:10 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

On 03/01/2017 10:24 AM, Marc-André Lureau wrote:
Hi

On Wed, Mar 1, 2017 at 6:50 PM Stefan Berger <address@hidden> wrote:

    On 03/01/2017 09:17 AM, Marc-André Lureau wrote:
    Hi

    On Wed, Mar 1, 2017 at 5:26 PM Stefan Berger <address@hidden> wrote:

        "Daniel P. Berrange" <address@hidden
        <mailto:address@hidden>> wrote on 03/01/2017 07:54:14
        AM:
        >

        > On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
        > > On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
        > > > > On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert wrote:
        > > > > * Stefan Berger (address@hidden) wrote:
        > > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
        > > > > <snip>
        > > > >
        > > > > > > So what was the multi-instance vTPM proxy driver patch set about?
        > > > > > That's for containers.
        > > > > Why have the two mechanisms? Can you explain how the multi-instance
        > > > > proxy works; my brief reading when I saw your patch series seemed
        > > > > to suggest it could be used instead of CUSE for the non-container case.
        > > > One of the key things that was/is not appealing about this CUSE approach
        > > > is that it basically invents a new ioctl() mechanism for talking to
        > > > a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't need
        > > > to have any changes at all - its existing driver for talking to TPM
        > >
        > > We still need the control channel with the vTPM to reset it upon VM reset,
        > > for getting and setting the state of the vTPM upon snapshot/suspend/resume,
        > > changing locality, etc.
        >
        > You ultimately need the same mechanisms if using in-kernel vTPM with
        > containers as containers can support snapshot/suspend/resume/etc too.

        The vTPM running on the backend side of the vTPM proxy driver is
        essentially the same as the CUSE TPM used for QEMU. It has the same control
        channel through sockets. So on that level we would have support for the
        operations, but not integrated with anything that would support container
        migration.


    Ah, that might explain why you added the socket control channel,
    but there is no user yet? (or some private product perhaps).
    Could you tell if the control and data channels need to be
    synchronized in any way?


    In the general case, synchronization would have to happen, yes. So
    a lock that is held while the TPM processes data would have to
    lock out control channel commands that operate on the TPM data.
    That may be missing. In the case of QEMU being the client, not much
    concurrency would be expected there, simply because of the way QEMU
    interacts with it.
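A minimal sketch of the locking being described, assuming a pthread-based emulator process; the function names are invented purely for illustration, this is not swtpm code:

    /* Sketch only: one mutex serializes TPM command processing (data
     * channel) against control commands that touch the same TPM state. */
    #include <pthread.h>
    #include <stddef.h>

    static pthread_mutex_t tpm_state_lock = PTHREAD_MUTEX_INITIALIZER;

    /* data channel: process one TPM request */
    void handle_data_request(const unsigned char *req, size_t req_len)
    {
        pthread_mutex_lock(&tpm_state_lock);
        /* ... run the request through the TPM emulator ... */
        pthread_mutex_unlock(&tpm_state_lock);
    }

    /* control channel: e.g. reset, get/set state blobs, locality */
    void handle_control_request(int cmd)
    {
        pthread_mutex_lock(&tpm_state_lock);
        /* ... touch TPM state only while no command is in flight ... */
        pthread_mutex_unlock(&tpm_state_lock);
    }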


Could the data channel be muxed in with the control channel? (that is, only use one control socket)


You could run the data channel as part of the control channel or vice versa. I think the problem is that the TCG hasn't defined anything in this area, and two people in different rooms will come up with two different designs.
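Purely as a hypothetical illustration of such a mux (nothing like this is defined by the TCG, and the type and field names are invented), one shared socket could carry both channels behind a small header:

    /* Hypothetical framing for one shared socket; not a defined protocol. */
    #include <stdint.h>

    enum vtpm_chan {
        VTPM_CHAN_DATA = 0,   /* TPM request/response bytes */
        VTPM_CHAN_CTRL = 1    /* reset, get/set state, locality, ... */
    };

    struct vtpm_msg_hdr {
        uint8_t  channel;     /* one of enum vtpm_chan */
        uint8_t  reserved[3];
        uint32_t length;      /* payload length in bytes */
    };
    /* Each message is a vtpm_msg_hdr followed by 'length' payload bytes. */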




    A detail: a corner case is live migration while the TPM emulation
    is busy processing a command, such as the creation of a key. In that
    case QEMU would keep on running and only start streaming device
    state to the recipient side after the TPM command processing
    finishes and has returned the result. QEMU wouldn't want to get
    stuck in a lock between data and control channel, so it would have
    other means of determining when the backend processing is done.
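The logic described above, as a rough sketch (not actual QEMU code; all names are illustrative): before saving the TPM device state, wait for the outstanding command to complete rather than grabbing the lock shared with the data path.

    /* Sketch only: wait for an in-flight TPM command to finish before
     * the device state is streamed to the migration target. */
    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t cmd_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cmd_done = PTHREAD_COND_INITIALIZER;
    static bool cmd_in_flight;

    /* called from the device-state save path */
    void tpm_backend_wait_for_completion(void)
    {
        pthread_mutex_lock(&cmd_lock);
        while (cmd_in_flight) {
            pthread_cond_wait(&cmd_done, &cmd_lock);
        }
        pthread_mutex_unlock(&cmd_lock);
    }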



    Getting back to the original out-of-process design: qemu links
    with many libraries already, so perhaps a less controversial
    approach would be to have a linked-in solution before proposing
    out-of-process? This would be easier to deal with for

    I had already proposed a linked-in version before I went to the
    out-of-process design. Anthony's concerns back then were that the
    code was not trusted and that a segfault in it could bring down
    all of QEMU. That we have test suites running over it didn't work
    as an argument. Some of the test suites are private, though.


I think Anthony's argument is valid for anything running in qemu :) So I don't see why TPM would be an exception now.

Could you say how much is covered by the public test suite?

I don't know anything in terms of percentage of code coverage. But in terms of coverage of the commands of a TPM 1.2, I think we were probably >95%. Now there's also TPM 2, and for that I don't know.


About tests, is there any test for qemu TIS?

For the TIS I had some very limited tests in SeaBIOS, which of course are not upstreamed. The primary goal there, though, was to test live migration while doing PCR Extends.




    management layers etc. This wouldn't be the most robust solution,
    but could get us somewhere at least for easier testing and
    development.

    Hm. In terms of an external process it's basically 'there', so I
    don't relate to the 'easier testing and development'. The various
    versions with the QEMU + CUSE TPM driver patches applied are here:

    https://github.com/stefanberger/qemu-tpm/tree/v2.8.0+tpm
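For reference, with those patches applied the guest-facing side would be started roughly as below; the option names are recalled from the patch series and may not match exactly, so treat this as a sketch rather than documentation.

    qemu-system-x86_64 ... \
        -tpmdev cuse-tpm,id=tpm0,path=/dev/vtpm-mytpm \
        -device tpm-tis,tpmdev=tpm0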



Some people may want to use a simulated TPM with qemu without the need for security, just to do development/testing.


For that they have a solution with the above tree and the swtpm and libtpms projects.

Dealing with external processes also makes qemu development and testing more difficult.

Well, internal didn't fly previously.


Changing the IPC interface is more complicated than having a linked-in solution. Testing is easier if you can just start/kill one qemu process.

I can't say if it's really needed to ease progress, but at least it would avoid the CUSE/IPC discussion for now.

    I have an older version of libvirt that has the necessary patches
    applied to start QEMU with the external TPM. There's also
    virt-manager support.


Ok, I think it would be worth listing all the up-to-date trees on http://www.qemu-project.org/Features/TPM (btw, that page is 5 years old; it would be nice if you could refresh it, I bet some changes happened)

    If CUSE is the wrong interface, then there's a discussion about
    this here. Alternatively, UnixIO could be used for the data and
    control channels.

    https://github.com/stefanberger/swtpm/issues/4
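For comparison, swtpm's socket interface can expose both the data and the control channel over UNIX sockets, roughly like this (flag syntax as in the swtpm tree; please double-check against the current code before relying on it):

    swtpm socket --tpmstate dir=/tmp/mytpm \
        --server type=unixio,path=/tmp/mytpm/data.sock \
        --ctrl type=unixio,path=/tmp/mytpm/ctrl.sock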


If there is no strong argument for CUSE, I would go without it.

(I'd also suggest an approach similar to the vhost-user backend I proposed in http://lists.nongnu.org/archive/html/qemu-devel/2016-06/msg01014.html:
it spawns a backend and passes an extra socketpair fd to it)
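The socketpair idea from that proposal looks roughly like this on the launcher side; this is a generic POSIX sketch, not the actual vhost-user code, and the backend's "--fd" option is invented for illustration.

    /* Generic POSIX sketch of "spawn a backend and pass it an extra
     * socketpair fd": the child inherits one end as a known fd number,
     * the parent keeps the other end as its control channel. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <stdio.h>

    int spawn_backend(const char *backend_path)
    {
        int sv[2];
        pid_t pid;

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
            perror("socketpair");
            return -1;
        }
        pid = fork();
        if (pid == 0) {                 /* child: becomes the backend */
            dup2(sv[1], 3);             /* hand over one end as fd 3 */
            close(sv[0]);
            execl(backend_path, backend_path, "--fd", "3", (char *)NULL);
            _exit(127);
        }
        close(sv[1]);
        return sv[0];                   /* parent keeps the other end */
    }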



