qemu-devel

Re: [Qemu-devel] vNVRAM / blobstore design


From: Corey Bryant
Subject: Re: [Qemu-devel] vNVRAM / blobstore design
Date: Wed, 27 Mar 2013 11:20:43 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130311 Thunderbird/17.0.4



On 03/27/2013 11:17 AM, Corey Bryant wrote:


On 03/25/2013 06:20 PM, Stefan Berger wrote:
On 03/25/2013 06:05 PM, Anthony Liguori wrote:
Stefan Berger <address@hidden> writes:

[argh, just posted this to qemu-trivial -- it's not trivial]


Hello!

I am posting this message to revive the previous discussions about the
design of vNVRAM / blobstore cc'ing (at least) those that participated
in this discussion 'back then'.

The first goal of the implementation is to provide a vNVRAM storage for
a software implementation of a TPM to store its different blobs into.
Some of the data that the TPM writes into persistent memory needs to
survive a power-down/power-up cycle of a virtual machine, so this type
of persistent storage is needed. So that the vNVRAM does not become a
roadblock for VM migration, we would layer it on top of a block device
and make use of block device migration, storing the vNVRAM data in
virtual machine images.

Besides the TPM blobs, the vNVRAM should of course also be able to
accommodate other use cases where persistent data is stored into NVRAM.

Well let's focus more on the "blob store".  What are the semantics of
this?  Is there a max number of blobs?  Are the sizes fixed or variable?
How often are new blobs added/removed?

The max number of blobs and frequency of usage depends on the usage
scenario and NVRAM size.  But that's probably obvious.

I think we should focus on worst case scenarios where NVRAM is filled up
and used frequently.

One example is that an application can use TSS APIs to define, undefine,
read, and write to the TPM's NVRAM storage.  (The TPM owner password is
required to define NVRAM data.)  An application could potentially fill
up NVRAM and frequently store, change, retrieve data in various places
within NVRAM.  And the data could have various sizes.

For an example of total NVRAM size, Infineon's TPM has 16K of NVRAM.
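The worst case described above, where applications keep defining NV areas until the store is full, can be sketched as simple capacity accounting. This is an illustrative sketch only, not QEMU or TPM code; the function name and the 16K constant (taken from the Infineon example) are assumptions:

```c
#include <stddef.h>

#define NVRAM_TOTAL 16384  /* e.g. Infineon's TPM has 16K of NVRAM */

/* How much of the TPM's NVRAM has been claimed so far by
 * TPM_NV_DefineSpace-style operations. */
static size_t nvram_used;

/* Try to define an NV area of 'size' bytes; return 0 on success,
 * -1 once the store would overflow (the "NVRAM filled up" case). */
int nv_define_space(size_t size)
{
    if (size > NVRAM_TOTAL - nvram_used) {
        return -1;
    }
    nvram_used += size;
    return 0;
}
```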

--
Regards,
Corey Bryant


I just wanted to add that we could really use some direction on which way the community would prefer we go with this. The two options on the table at the moment for encoding/decoding the vNVRAM byte stream are BER or JSON visitors.
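For a feel of what the BER option means at the byte level, here is a minimal sketch of wrapping a blob as a BER OCTET STRING (tag 0x04). This is not the proposed visitor code; the function name is invented, and only short-form lengths plus the two-octet long form are handled:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Encode 'blob' as a BER OCTET STRING into 'out'.  Returns the number
 * of bytes written, or 0 if the blob is too large or 'out' too small. */
size_t ber_encode_octet_string(const uint8_t *blob, size_t len,
                               uint8_t *out, size_t out_size)
{
    size_t hdr = len < 128 ? 2 : 4;

    if (len > 0xffff || out_size < hdr + len) {
        return 0;
    }
    out[0] = 0x04;                    /* OCTET STRING tag */
    if (len < 128) {
        out[1] = (uint8_t)len;        /* short-form length */
    } else {
        out[1] = 0x82;                /* long form: 2 length octets */
        out[2] = (uint8_t)(len >> 8);
        out[3] = (uint8_t)(len & 0xff);
    }
    memcpy(out + hdr, blob, len);
    return hdr + len;
}
```

A JSON visitor would instead emit the blob as, say, a base64 string inside a keyed object, trading compactness for human readability.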

--
Regards,
Corey Bryant


In case of TPM 1.2 there are 3 blobs that can be written at different
times for different reasons.

Examples: As with a real-world TPM, users loading an owner-evict key into
the TPM will cause the TPM to write that key into its own NVRAM. This
key survives a power-off of the machine. Further, the TPM models its own
NVRAM slots; someone writing into this type of memory will cause data to
be written into the NVRAM. The TPM offers other commands that cause data
to be written into NVRAM, which users can invoke at any time.

The sizes of the TPM's NVRAM blobs vary, but I handle this in the TPM
emulation by padding them to a fixed size. Depending on how many
owner-evict keys are loaded into the TPM, its permanent state blob size
may vary. Other devices may act differently.
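The padding idea can be sketched as follows; the slot size and function name are assumptions for illustration, not values from the TPM emulation:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Assumed fixed slot size so each blob's offset in the vNVRAM image
 * stays constant regardless of the blob's current size. */
#define BLOB_SLOT_SIZE 4096

/* Copy a variable-sized blob into a fixed-size slot, zero-padding the
 * remainder; return -1 if the blob does not fit. */
int pad_blob(const uint8_t *blob, size_t len, uint8_t slot[BLOB_SLOT_SIZE])
{
    if (len > BLOB_SLOT_SIZE) {
        return -1;
    }
    memcpy(slot, blob, len);
    memset(slot + len, 0, BLOB_SLOT_SIZE - len);
    return 0;
}
```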

We have a priori knowledge about the 3 different types of blobs the TPM
device produces. They are 'registered' once at the beginning (see API)
and are not 'removed' as such.
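The 'registered once, never removed' shape described here might look roughly like the following. All names and the entry layout are hypothetical, not the API under discussion:

```c
#include <stddef.h>

#define MAX_BLOBS 3            /* the 3 TPM 1.2 blob types known a priori */

struct blob_entry {
    const char *name;
    size_t max_size;           /* fixed at registration time */
    int registered;
};

static struct blob_entry blobs[MAX_BLOBS];

/* Register blob 'id' once at startup; re-registration fails, and there
 * is deliberately no unregister operation. */
int blobstore_register(int id, const char *name, size_t max_size)
{
    if (id < 0 || id >= MAX_BLOBS || blobs[id].registered) {
        return -1;
    }
    blobs[id].name = name;
    blobs[id].max_size = max_size;
    blobs[id].registered = 1;
    return 0;
}
```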

Regards,
     Stefan





