
Re: [Qemu-devel] vNVRAM / blobstore design


From: Stefan Berger
Subject: Re: [Qemu-devel] vNVRAM / blobstore design
Date: Mon, 25 Mar 2013 18:20:24 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:15.0) Gecko/20120911 Thunderbird/15.0.1

On 03/25/2013 06:05 PM, Anthony Liguori wrote:
> Stefan Berger <address@hidden> writes:
>
> [argh, just posted this to qemu-trivial -- it's not trivial]
>
>> Hello!
>>
>> I am posting this message to revive the previous discussions about the
>> design of vNVRAM / blobstore, cc'ing (at least) those who participated
>> in this discussion 'back then'.
>>
>> The first goal of the implementation is to provide a vNVRAM storage for
>> a software implementation of a TPM to store its different blobs in.
>> Some of the data that the TPM writes into persistent memory needs to
>> survive a power down / power up cycle of the virtual machine, so
>> this type of persistent storage is needed. For the vNVRAM not to become
>> a road-block for VM migration, we would make use of block device
>> migration and layer the vNVRAM on top of the block device, thus
>> using virtual machine images for storing the vNVRAM data.
>>
>> Besides the TPM blobs, the vNVRAM should of course also be able to
>> accommodate other use cases where persistent data is stored into
>> NVRAM.
>
> Well let's focus more on the "blob store".  What are the semantics of
> this?  Is there a max number of blobs?  Are the sizes fixed or variable?
> How often are new blobs added/removed?

In the case of TPM 1.2 there are 3 blobs that can be written at different times for different reasons.

Examples: As with a real-world TPM, a user loading an owner-evict key into the TPM will cause the TPM to write that key into its own NVRAM. This key survives a power-off of the machine. Further, the TPM models its own NVRAM slots; someone writing into this type of memory will cause data to be written into the NVRAM. There are other commands the TPM offers that cause data to be written into NVRAM, and users can invoke them at any time.

The sizes of the TPM's NVRAM blobs vary, but I handle this in the TPM emulation by padding them to a fixed size. Depending on how many owner-evict keys are loaded into the TPM, its permanent state blob size may vary. Other devices may act differently.
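The padding described above can be sketched roughly as follows. This is only an illustration of the idea, not the actual QEMU or TPM-emulation code; the names (pad_blob, BLOB_SLOT_SIZE) and the slot size are assumptions:

```c
/* Hypothetical sketch: padding a variable-size TPM blob so it fits a
 * fixed-size vNVRAM slot. Names and sizes are illustrative only. */
#include <stdint.h>
#include <string.h>

#define BLOB_SLOT_SIZE 2048  /* assumed fixed slot size in the vNVRAM */

/* Copy 'len' bytes of blob data into a zero-padded fixed-size slot.
 * Returns 0 on success, -1 if the blob does not fit. */
static int pad_blob(const uint8_t *data, size_t len,
                    uint8_t slot[BLOB_SLOT_SIZE])
{
    if (len > BLOB_SLOT_SIZE) {
        return -1;  /* blob too large for the reserved slot */
    }
    memset(slot, 0, BLOB_SLOT_SIZE);  /* zero-pad the tail */
    memcpy(slot, data, len);
    return 0;
}
```

With such padding, each blob always occupies the same number of bytes in the backing image regardless of how the blob's real size varies.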

We have a priori knowledge of the 3 different types of blobs the TPM device produces. They are 'registered' once at the beginning (see the API) and are not 'removed' as such.
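The register-once idea could look something like the sketch below. All names here (TPMBlobType, nvram_register_blob) are assumptions for illustration, not the API under discussion; the three blob types mirror the permanent / volatile / savestate split mentioned for TPM 1.2:

```c
/* Hypothetical sketch of registering the three TPM 1.2 blob types once
 * at startup. Names are illustrative, not the proposed QEMU API. */
#include <stddef.h>

typedef enum {
    TPM_BLOB_PERMANENT,   /* permanent state, e.g. owner-evict keys */
    TPM_BLOB_VOLATILE,    /* volatile state */
    TPM_BLOB_SAVESTATE,   /* TPM_SaveState blob */
    TPM_BLOB_MAX
} TPMBlobType;

typedef struct {
    int registered;
    size_t max_size;      /* fixed (padded) size reserved in the vNVRAM */
} NVRAMBlobEntry;

static NVRAMBlobEntry blob_registry[TPM_BLOB_MAX];

/* Register a blob type once; returns 0 on success,
 * -1 if the type is invalid or already registered. */
static int nvram_register_blob(TPMBlobType type, size_t max_size)
{
    if (type >= TPM_BLOB_MAX || blob_registry[type].registered) {
        return -1;
    }
    blob_registry[type].registered = 1;
    blob_registry[type].max_size = max_size;
    return 0;
}
```

Because the set of blobs and their padded sizes are known up front, the vNVRAM layout on the backing image can be computed once at registration time and never changes afterwards.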

Regards,
    Stefan



