

From: Paul Moore
Subject: Re: [Qemu-devel] [PATCH V8 03/14] Add persistent state handling to TPM TIS frontend driver
Date: Fri, 09 Sep 2011 17:13:30 -0400
User-agent: KMail/4.7.0 (Linux/2.6.39-gentoo-r4; KDE/4.7.0; x86_64; ; )

On Wednesday, August 31, 2011 10:35:54 AM Stefan Berger wrote:
> Index: qemu-git/hw/tpm_tis.c
> ===================================================================
> --- qemu-git.orig/hw/tpm_tis.c
> +++ qemu-git/hw/tpm_tis.c
> @@ -6,6 +6,8 @@
>   * Author: Stefan Berger <address@hidden>
>   *         David Safford <address@hidden>
>   *
> + * Xen 4 support: Andreas Niederl <address@hidden>
> + *
>   * This program is free software; you can redistribute it and/or
>   * modify it under the terms of the GNU General Public License as
>   * published by the Free Software Foundation, version 2 of the
> @@ -839,3 +841,167 @@ static int tis_init(ISADevice *dev)
>   err_exit:
>      return -1;
>  }
> +
> +/* persistent state handling */
> +
> +static void tis_pre_save(void *opaque)
> +{
> +    TPMState *s = opaque;
> +    uint8_t locty = s->active_locty;

Is it safe to read s->active_locty without holding the state_lock?  I'm not 
sure at this point, but I saw it being protected by the lock elsewhere ...

If the state_lock does not protect all of the structure, it might be nice to 
add some comments to the structure declaration explaining which fields are 
protected by the state_lock and which are not.
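A rough sketch of the kind of annotation I mean, with simplified stand-in types; the field names mirror the patch, but the locking comments here are illustrative assumptions, not a statement of what the patch actually guarantees:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: which fields the hypothetical state_lock covers. */
typedef struct TPMLocality {
    uint32_t state;        /* protected by TPMState.state_lock */
    uint32_t r_offset;     /* protected by TPMState.state_lock */
    uint32_t w_offset;     /* protected by TPMState.state_lock */
} TPMLocality;

typedef struct TPMState {
    /* QemuMutex state_lock would live here in the real structure.
     * Assumption for illustration: state_lock protects active_locty and
     * every loc[] field; buf/offset are touched only by the main thread
     * during save/restore and need no lock. */
    uint8_t     active_locty;  /* protected by state_lock */
    TPMLocality loc[5];        /* protected by state_lock */
    uint8_t     buf[4096];     /* main thread only */
    uint32_t    offset;        /* main thread only */
} TPMState;

/* Small helper so the layout above can be sanity-checked. */
size_t tpm_state_size(void)
{
    return sizeof(TPMState);
}
```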

> +    qemu_mutex_lock(&s->state_lock);
> +
> +    /* wait for outstanding requests to complete */
> +    if (IS_VALID_LOCTY(locty) && s->loc[locty].state == STATE_EXECUTION) {
> +        if (!s->be_driver->ops->job_for_main_thread) {
> +            qemu_cond_wait(&s->from_tpm_cond, &s->state_lock);
> +        } else {
> +            while (s->loc[locty].state == STATE_EXECUTION) {
> +                qemu_mutex_unlock(&s->state_lock);
> +
> +                s->be_driver->ops->job_for_main_thread(NULL);
> +                usleep(10000);
> +
> +                qemu_mutex_lock(&s->state_lock);

Hmm, this may be right, but it looks dangerous to me; can the active_locty 
change while the state_lock is dropped?  What about loc[locty].state?
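One way to make the re-check explicit is to fold both reads into the loop condition, so they are re-evaluated every time the lock is re-taken. A self-contained sketch, using pthread as a stand-in for QemuMutex and hypothetical names (DemoState, wait_for_completion, poll):

```c
#include <pthread.h>
#include <stdint.h>

enum { STATE_IDLE, STATE_EXECUTION };

typedef struct DemoState {
    pthread_mutex_t state_lock;  /* stand-in for s->state_lock */
    uint8_t active_locty;        /* stand-in for s->active_locty */
    int loc_state[5];            /* stand-in for s->loc[locty].state */
} DemoState;

/* Wait until the active locality leaves STATE_EXECUTION.  Both
 * active_locty and loc_state[] are re-read under the lock on every
 * iteration, so a change made while the lock was dropped is not missed. */
static void wait_for_completion(DemoState *s, void (*poll)(DemoState *))
{
    pthread_mutex_lock(&s->state_lock);
    while (s->active_locty < 5 &&
           s->loc_state[s->active_locty] == STATE_EXECUTION) {
        pthread_mutex_unlock(&s->state_lock);
        if (poll) {
            poll(s);  /* stand-in for job_for_main_thread(NULL) */
        }
        pthread_mutex_lock(&s->state_lock);
        /* loop condition re-evaluates with the lock held */
    }
    pthread_mutex_unlock(&s->state_lock);
}
```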

> +            }
> +        }
> +    }
> +
> +#ifdef DEBUG_TIS_SR
> +    fprintf(stderr,
> +            "tpm_tis: suspend: locty 0 : r_offset = %d, w_offset = %d\n",
> +            s->loc[0].r_offset, s->loc[0].w_offset);
> +    if (s->loc[0].r_offset) {
> +        tis_dump_state(opaque, 0);
> +    }
> +#endif
> +
> +    qemu_mutex_unlock(&s->state_lock);
> +
> +    /* copy current active read or write buffer into the buffer
> +       written to disk */
> +    if (IS_VALID_LOCTY(locty)) {
> +        switch (s->loc[locty].state) {

More concerns about reading loc[locty].state without holding the state_lock.

> +        case STATE_RECEPTION:
> +            memcpy(s->buf,
> +                   s->loc[locty].w_buffer.buffer,
> +                   MIN(sizeof(s->buf),
> +                       s->loc[locty].w_buffer.size));
> +            s->offset = s->loc[locty].w_offset;

Same thing, just different fields ...

> +        break;
> +        case STATE_COMPLETION:
> +            memcpy(s->buf,
> +                   s->loc[locty].r_buffer.buffer,
> +                   MIN(sizeof(s->buf),
> +                       s->loc[locty].r_buffer.size));
> +            s->offset = s->loc[locty].r_offset;

Again ...

> +        break;
> +        default:
> +            /* leak nothing */
> +            memset(s->buf, 0x0, sizeof(s->buf));

Maybe?

> +        break;
> +        }
> +    }
> +
> +    s->be_driver->ops->save_volatile_data();
> +}
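For the repeated concerns above, one option would be to take the buffer snapshot before dropping the state_lock, so loc->state cannot change between the switch and the memcpy. A sketch with simplified stand-in types (DemoBuffer, DemoLocality, snapshot_locality are all hypothetical names, not the patch's API):

```c
#include <pthread.h>
#include <stdint.h>
#include <string.h>

enum { STATE_IDLE, STATE_RECEPTION, STATE_COMPLETION };

#define BUF_SIZE 4096

typedef struct DemoBuffer {
    uint8_t *buffer;
    size_t size;
    uint32_t offset;
} DemoBuffer;

typedef struct DemoLocality {
    pthread_mutex_t state_lock;  /* stand-in for s->state_lock */
    int state;                   /* stand-in for s->loc[locty].state */
    DemoBuffer r, w;             /* stand-in for r_buffer / w_buffer */
} DemoLocality;

/* Copy the active read or write buffer into out[] while still holding
 * state_lock; returns the matching offset.  Unknown states zero the
 * buffer so nothing stale leaks into the saved state. */
static uint32_t snapshot_locality(DemoLocality *loc, uint8_t out[BUF_SIZE])
{
    uint32_t offset = 0;

    pthread_mutex_lock(&loc->state_lock);
    switch (loc->state) {
    case STATE_RECEPTION:
        memcpy(out, loc->w.buffer,
               loc->w.size < BUF_SIZE ? loc->w.size : BUF_SIZE);
        offset = loc->w.offset;
        break;
    case STATE_COMPLETION:
        memcpy(out, loc->r.buffer,
               loc->r.size < BUF_SIZE ? loc->r.size : BUF_SIZE);
        offset = loc->r.offset;
        break;
    default:
        memset(out, 0x0, BUF_SIZE);  /* leak nothing */
        break;
    }
    pthread_mutex_unlock(&loc->state_lock);

    return offset;
}
```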

-- 
paul moore
virtualization @ redhat
