From: Stefan Berger
Subject: Re: [Qemu-devel] [PATCH V14 5/7] Add a TPM Passthrough backend driver implementation
Date: Mon, 20 Feb 2012 16:12:56 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.23) Gecko/20110928 Fedora/3.1.15-1.fc14 Lightning/1.0b3pre Thunderbird/3.1.15

On 02/20/2012 03:01 PM, Michael S. Tsirkin wrote:
On Wed, Dec 14, 2011 at 08:43:20AM -0500, Stefan Berger wrote:
+struct TPMPassthruState {
+    QemuThread thread;
+    bool thread_terminate;
+    bool thread_running;
+
+    TPMPassthruThreadParams tpm_thread_params;
+
+    char tpm_dev[64];
why not strdup and avoid length limits?



Fixed.
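(For reference, a minimal sketch of the strdup-style fix being suggested, assuming the field becomes a heap-allocated string set from the backend options; the setter/teardown names here are illustrative, not the actual follow-up patch:)

    #include <glib.h>

    typedef struct TPMPassthruState {
        char *tpm_dev;                      /* was: char tpm_dev[64]; */
        /* ... other fields elided ... */
    } TPMPassthruState;

    /* hypothetical setter: copy the device path without a length limit */
    static void tpm_passthrough_set_dev(TPMPassthruState *tpm_pt,
                                        const char *value)
    {
        g_free(tpm_pt->tpm_dev);            /* g_free(NULL) is a no-op */
        tpm_pt->tpm_dev = g_strdup(value);
    }

    /* hypothetical teardown: release the copy when the backend goes away */
    static void tpm_passthrough_free_dev(TPMPassthruState *tpm_pt)
    {
        g_free(tpm_pt->tpm_dev);
        tpm_pt->tpm_dev = NULL;
    }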


+
+            /* in case we were to slow and missed the signal, the


too slow

Fixed.

+
+        qemu_mutex_lock(&tpm_pt->tpm_thread_params.tpm_state->state_lock);
+        qemu_cond_signal(&tpm_pt->tpm_thread_params.tpm_state->to_tpm_cond);
+        qemu_mutex_unlock(&tpm_pt->tpm_thread_params.tpm_state->state_lock);
why lock?


Before the other thread calls the function that waits on the condition, it has to grab the lock. Since the signal is not sticky, the waiter could have grabbed the lock but not yet be waiting on the condition when the signal fires; it would miss the signal, then start waiting for it shortly afterwards, and could end up waiting forever. Signaling while holding the lock prevents this. See the example here:

https://computing.llnl.gov/tutorials/pthreads/#ConVarSignal
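(A minimal pthreads sketch of that pattern, illustrative only -- the patch uses the qemu_mutex_*/qemu_cond_* wrappers around the same primitives. The predicate guards against both the missed-signal window and spurious wakeups:)

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t to_tpm_cond = PTHREAD_COND_INITIALIZER;
    static bool request_pending;        /* predicate, guarded by state_lock */

    /* waiter (TPM thread): takes the lock, then waits; cond_wait()
     * atomically releases the lock while blocking */
    static void wait_for_request(void)
    {
        pthread_mutex_lock(&state_lock);
        while (!request_pending) {
            pthread_cond_wait(&to_tpm_cond, &state_lock);
        }
        request_pending = false;
        pthread_mutex_unlock(&state_lock);
    }

    /* signaler (main thread): because it holds the lock, the waiter is
     * either already blocked in cond_wait() or has not yet tested the
     * predicate -- the wake-up cannot be lost in between */
    static void post_request(void)
    {
        pthread_mutex_lock(&state_lock);
        request_pending = true;
        pthread_cond_signal(&to_tpm_cond);
        pthread_mutex_unlock(&state_lock);
    }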

+static size_t tpm_passthrough_realloc_buffer(TPMSizedBuffer *sb)
+{
+    size_t wanted_size = 4096; /* Linux tpm.c buffer size */
what does the comment mean?


It means the function allocates a buffer of the same size as the Linux driver's. If the TPM were to send a response of that size (no request comes even close), the Linux driver can accommodate it, and so can we.
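(For context, a sketch of how the quoted function plausibly continues; the TPMSizedBuffer layout is assumed here, not copied from the patch:)

    #include <stdint.h>
    #include <glib.h>

    typedef struct TPMSizedBuffer {     /* assumed layout */
        uint32_t size;
        uint8_t *buffer;
    } TPMSizedBuffer;

    static size_t tpm_passthrough_realloc_buffer(TPMSizedBuffer *sb)
    {
        size_t wanted_size = 4096;      /* same size the Linux tpm.c driver uses */

        if (sb->size != wanted_size) {
            /* grow so that any response the kernel driver could hand us
             * fits in a single read */
            sb->buffer = g_realloc(sb->buffer, wanted_size);
            sb->size = wanted_size;
        }
        return sb->size;
    }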


+ * (error response is fine) within one second.
+ */
+static int tpm_passthrough_test_tpmdev(int fd)
+{
+    struct tpm_req_hdr req = {
+        .tag = cpu_to_be16(TPM_TAG_RQU_COMMAND),
+        .len = cpu_to_be32(sizeof(req)),
+        .ordinal = cpu_to_be32(TPM_ORD_GetTicks),
+    };
+    struct tpm_resp_hdr *resp;
+    fd_set readfds;
+    int n;
+    struct timeval tv = {
+        .tv_sec = 1,
+        .tv_usec = 0,
+    };
+    unsigned char buf[1024];
+
+    n = write(fd, &req, sizeof(req));
+    if (n < 0) {
+        return errno;
+    }
+    if (n != sizeof(req)) {
+        return EFAULT;
+    }
+
+    FD_ZERO(&readfds);
+    FD_SET(fd, &readfds);
+
+    /* wait for a second */
+    n = select(fd + 1, &readfds, NULL, NULL, &tv);
+    if (n != 1) {
+        return errno;
+    }
+
+    n = read(fd, &buf, sizeof(buf));
so why read the whole buf? how do we know 1024 is enough?

The response to this particular command easily fits into 1024 bytes; even fewer than 50 bytes would be enough. I could work out the exact number, but that would probably just raise more questions... 1024 bytes is ample space.

read the header, then discard the rest until empty.


I don't follow... ?

FYI: The TPM only lets you read the response in one go. If you don't bring a big enough buffer to the read from /dev/tpm0, the rest of the result is gone; there is no second read on the same response buffer.
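(To make that concrete, a hedged sketch of the one-shot read semantics; the header layout is assumed, and QEMU would use its own be32_to_cpu() rather than the glibc macro used here:)

    #include <endian.h>                 /* be32toh(), glibc */
    #include <stdint.h>
    #include <unistd.h>

    struct tpm_resp_hdr {               /* assumed wire layout, big-endian */
        uint16_t tag;
        uint32_t len;
        uint32_t errcode;
    } __attribute__((packed));

    static int tpm_read_response(int fd)
    {
        unsigned char buf[1024];        /* ample for a GetTicks response */
        struct tpm_resp_hdr *resp = (struct tpm_resp_hdr *)buf;
        ssize_t n;

        /* must fetch the whole response in one read(); whatever does not
         * fit into buf is discarded, and a second read() will not
         * return it */
        n = read(fd, buf, sizeof(buf));
        if (n < (ssize_t)sizeof(*resp)) {
            return -1;                  /* error or truncated header */
        }
        if (be32toh(resp->len) != (uint32_t)n) {
            return -1;                  /* header disagrees with byte count */
        }
        return 0;
    }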

Even though /dev/tpm0 blocks in the write(), the select() above is there to prepare for use with sockets. It doesn't hurt at the moment, but it also doesn't do the expected one-second wait...
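(If the one-second budget ever needs to be enforced for real, e.g. once sockets are in the picture, a more careful wait might look like the sketch below. It assumes Linux semantics, where select() updates tv to the time remaining, so retrying after EINTR keeps the overall deadline; this is not POSIX-portable:)

    #include <errno.h>
    #include <sys/select.h>

    static int wait_readable(int fd)
    {
        struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };
        fd_set readfds;
        int n;

        do {
            FD_ZERO(&readfds);
            FD_SET(fd, &readfds);
            n = select(fd + 1, &readfds, NULL, NULL, &tv);
        } while (n < 0 && errno == EINTR);

        if (n == 0) {
            return ETIMEDOUT;           /* a timeout leaves errno untouched */
        }
        return n < 0 ? errno : 0;
    }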


   Stefan



