
Re: [Qemu-devel] vNVRAM / blobstore design


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] vNVRAM / blobstore design
Date: Thu, 28 Mar 2013 19:39:09 +0200

On Thu, Mar 28, 2013 at 12:27:45PM -0500, Anthony Liguori wrote:
> Stefan Berger <address@hidden> writes:
> 
> > On 03/27/2013 03:12 PM, Stefan Berger wrote:
> >> On 03/27/2013 02:27 PM, Anthony Liguori wrote:
> >>> Stefan Berger <address@hidden> writes:
> >>>
> >>>> On 03/27/2013 01:14 PM, Anthony Liguori wrote:
> >>>>> Stefan Berger <address@hidden> writes:
> >>>>>
> >>>>> What I struggle with is that we're calling this a "blobstore".  Using
> >>>>> BER to store "blobs" seems kind of pointless especially when we're
> >>>>> talking about exactly three blobs.
> >>>>>
> >>>>> I suspect real hardware does something like, flash is N bytes, blob 1
> >>>>> is a max of X bytes, blob 2 is a max of Y bytes, and blob 3 is
> >>>>> (N - X - Y) bytes.
> >>>>>
> >>>>> Do we really need to do anything more than that?
> >>>> I typically call it NVRAM, but earlier discussions seemed to prefer
> >>>> 'blobstore'.
> >>>>
> >>>> Using BER is the 2nd design of the NVRAM/blobstore. The 1st one didn't
> >>>> use any visitors but used a directory in the first sector pointing to
> >>>> the actual blobs in other sectors of the block device. The organization
> >>>> of the directory and assignment of the blobs to their sectors, aka 'the
> >>>> layout of the data' in the disk image, was handled by the
> >>>> NVRAM/blobstore implementation.
> >>> Okay, the short response is:
> >>>
> >>> Just make the TPM have a DRIVE property, drop all notion of
> >>> NVRAM/blobstore, and used fixed offsets into the BlockDriverState for
> >>> each blob.
> >>
> >> Fine by me. I don't see the need for visitors. I guess sharing the 
> >> persistent storage between different types of devices is not a goal 
> >> here, so a layer that hides the layout and the blobs' positions 
> >> within the storage is not necessary. Also fine by me, as long as 
> >> we don't come back to this discussion.
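The fixed-offset scheme discussed above could be sketched roughly as follows; all sizes and names here are illustrative assumptions, not the actual TPM blob sizes or QEMU identifiers:

```c
/* Illustrative sizes only -- the real values would come from the TPM
 * implementation / device properties. */
#define NVRAM_SIZE     (256 * 1024)   /* N: total size of the drive */
#define PERM_BLOB_MAX  (128 * 1024)   /* X: max size of blob 1      */
#define SAVE_BLOB_MAX  (64 * 1024)    /* Y: max size of blob 2      */

/* Fixed offsets into the drive: blob 1 at 0, blob 2 at X, and
 * blob 3 at X + Y with the remaining N - X - Y bytes. */
enum {
    PERM_BLOB_OFFSET = 0,
    SAVE_BLOB_OFFSET = PERM_BLOB_MAX,
    VOLA_BLOB_OFFSET = PERM_BLOB_MAX + SAVE_BLOB_MAX,
    VOLA_BLOB_MAX    = NVRAM_SIZE - PERM_BLOB_MAX - SAVE_BLOB_MAX,
};

/* Map a blob index (0..2) to its fixed offset in the drive. */
static long blob_offset(int idx)
{
    static const long offs[] = {
        PERM_BLOB_OFFSET, SAVE_BLOB_OFFSET, VOLA_BLOB_OFFSET
    };
    return offs[idx];
}
```

With offsets fixed at compile time there is no directory to parse and no layout metadata to keep consistent, which is the simplification being argued for.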
> >
> > One thing I'd like to get clarity about is the following corner-case. A 
> > user supplies some VM image as persistent storage for the TPM.
> 
> What Would Hardware Do?
> 
> If you need to provide a tool to initialize the state, then just provide
> a small tool to do that, or provide a device option to initialize it that
> can be used on first run or something.
> 
> Don't bother trying to add complexity with CRCs or anything like that.
> Just keep it simple.
> 
> Regards,
> 
> Anthony Liguori


External tool sounds better. Updating on first use creates
nasty corner cases - "first use" isn't a well-defined thing,
so it creates nasty interactions with migration etc.
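A minimal sketch of such an external formatting tool's core (the magic string, its placement, and the buffer-based interface are made-up placeholders, not an agreed-on format): it zero-fills the image and stamps a marker, so the device can tell "formatted but empty" apart from garbage.

```c
#include <string.h>

/* Made-up marker; a real format would be agreed on in the TPM code. */
#define BLOB_MAGIC     "TPMNVRAM"
#define BLOB_MAGIC_LEN (sizeof(BLOB_MAGIC) - 1)

/* Zero the whole image buffer and stamp the marker at the start, so
 * the device sees "formatted but empty" rather than garbage. */
static void format_nvram(unsigned char *buf, size_t size)
{
    memset(buf, 0, size);
    memcpy(buf, BLOB_MAGIC, BLOB_MAGIC_LEN);
}
```

A small main() around this would open the image file, format a buffer of the image size, and write it out once; the device itself then only ever checks for the marker and never initializes anything implicitly.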

> > It contains garbage. How do we handle this case? Does the TPM then 
> > just start writing its state into this image, or do we want some 
> > layer in place that forces the user to go through a formatting step 
> > after that layer indicates the data are unreadable? Besides that, a 
> > completely empty image also contains garbage from the perspective of 
> > the TPM's persistent state and of that layer.
> >
> > My intention would (again) be to put a header in front of every blob. 
> > That header would contain a crc32 covering the header (minus the 
> > crc32 field itself, of course) plus the blob, to determine whether 
> > the blob is garbage or not. It is similar in those terms to the 1st 
> > implementation, where we also had a directory that contained a crc32 
> > for the directory itself and for each blob. This is not a filesystem, I know that.
> >
> >     Regards,
> >        Stefan
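As a sketch of the per-blob header idea (the field names and layout are assumptions for illustration, not an actual patch), the crc32 covers the header minus the crc field itself, followed by the blob data:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-blob header; the field layout is illustrative. */
typedef struct {
    uint32_t crc32;     /* covers the rest of the header + the blob data */
    uint32_t version;
    uint32_t blob_len;  /* length of the blob data following the header  */
} blob_header;

/* Plain bitwise CRC-32 (same polynomial zlib's crc32() uses). */
static uint32_t crc32_calc(uint32_t crc, const uint8_t *buf, size_t len)
{
    crc = ~crc;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++) {
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1)));
        }
    }
    return ~crc;
}

/* CRC over the header minus the crc32 field itself, then the blob. */
static uint32_t blob_crc(const blob_header *hdr, const uint8_t *data)
{
    uint32_t crc = crc32_calc(0, (const uint8_t *)hdr + sizeof(hdr->crc32),
                              sizeof(*hdr) - sizeof(hdr->crc32));
    return crc32_calc(crc, data, hdr->blob_len);
}

/* A blob whose stored crc32 does not match is treated as garbage. */
static int blob_is_valid(const blob_header *hdr, const uint8_t *data)
{
    return hdr->crc32 == blob_crc(hdr, data);
}
```

On a fresh or corrupted image the check fails for every blob, which gives the "data are unreadable, please format" signal without needing any directory or filesystem structure.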


