From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH 0/7] VNVRAM persistent storage
Date: Mon, 27 May 2013 10:40:44 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Fri, May 24, 2013 at 11:39:09AM -0400, Corey Bryant wrote:
> 
> 
> On 05/24/2013 08:36 AM, Stefan Hajnoczi wrote:
> >On Fri, May 24, 2013 at 08:13:27AM -0400, Stefan Berger wrote:
> >>On 05/24/2013 05:59 AM, Stefan Hajnoczi wrote:
> >>>On Thu, May 23, 2013 at 01:44:40PM -0400, Corey Bryant wrote:
> >>>>This patch series provides VNVRAM persistent storage support that
> >>>>QEMU can use internally.  The initial target user will be a software
> >>>>vTPM 1.2 backend that needs to store keys in VNVRAM and be able to
> >>>>reboot/migrate and retain the keys.
> >>>>
> >>>>This support uses QEMU's block driver to provide persistent storage
> >>>>by reading/writing VNVRAM data from/to a drive image.  The VNVRAM
> >>>>drive image is provided with the -drive command line option just like
> >>>>any other drive image and the vnvram_create() API will find it.
> >>>>
> >>>>The APIs allow for VNVRAM entries to be registered, one at a time,
> >>>>each with a maximum blob size.  Entry blobs can then be read/written
> >>>>from/to an entry on the drive.  Here's an example of usage:
> >>>>
> >>>>VNVRAM *vnvram;
> >>>>int errcode;
> >>>>const VNVRAMEntryName entry_name;
> >>>>const char *blob_w = "blob data";
> >>>>char *blob_r;
> >>>>uint32_t blob_r_size;
> >>>>
> >>>>vnvram = vnvram_create("drive-ide0-0-0", false, &errcode);
> >>>>strcpy((char *)entry_name, "first-entry");
> >>>VNVRAMEntryName is very prone to buffer overflow.  I hope real code
> >>>doesn't use strcpy().  The cast is ugly, please don't hide the type.
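
A bounded copy avoids the overflow, e.g. with QEMU's pstrcpy() (a
minimal sketch, assuming VNVRAMEntryName is a fixed-size char array):

VNVRAMEntryName entry_name;   /* not const, so no cast is needed */
pstrcpy(entry_name, sizeof(entry_name), "first-entry");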
> >>>
> >>>>vnvram_register_entry(vnvram, &entry_name, 1024);
> >>>>vnvram_write_entry(vnvram, &entry_name, (char *)blob_w, strlen(blob_w)+1);
> >>>>vnvram_read_entry(vnvram, &entry_name, &blob_r, &blob_r_size);
> >>>These are synchronous functions.  If I/O is involved then this is a
> >>>problem: QEMU will be blocked waiting for host I/O to complete and the
> >>>big QEMU lock is held.  This can cause poor guest interactivity and poor
> >>>scalability because vcpus cannot make progress, nor can the QEMU
> >>>monitor respond.
> >>
> >>The vTPM is going to run as a thread and will have to write state
> >>blobs into a bdrv. The above functions will typically be called from
> >>this thread. When I originally wrote the code, the vTPM thread could
> >>not write the blobs into bdrv directly, so I had to resort to
> >>sending a message to the main QEMU thread to write the data to the
> >>bdrv. How else could we do this?
> >
> >How else: use asynchronous APIs like bdrv_aio_writev() or the coroutine
> >versions (which eliminate the need for callbacks) like bdrv_co_writev().
> >
> >I'm preparing patches that allow the QEMU block layer to be used safely
> >outside the QEMU global mutex.  Once this is possible it would be okay
> >to use synchronous methods.
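
For concreteness, the two styles look roughly like this (a sketch only,
not code from the series; the vtpm_* names and VTPMState are made up,
and error handling is omitted):

/* Callback style with bdrv_aio_writev(): submit the request now,
 * completion is reported later through the callback. */
static void vtpm_write_done(void *opaque, int ret)
{
    VTPMState *s = opaque;        /* hypothetical state struct */
    s->last_ret = ret;            /* 0 on success, -errno on failure */
}

static void vtpm_submit_write(VTPMState *s, QEMUIOVector *qiov,
                              int64_t sector, int nb_sectors)
{
    bdrv_aio_writev(s->bs, sector, qiov, nb_sectors,
                    vtpm_write_done, s);
}

/* Coroutine style with bdrv_co_writev(): reads like synchronous code;
 * the coroutine yields while the I/O is in flight, so no callback. */
static int coroutine_fn vtpm_co_write(VTPMState *s, QEMUIOVector *qiov,
                                      int64_t sector, int nb_sectors)
{
    return bdrv_co_writev(s->bs, sector, nb_sectors, qiov);
}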
> 
> Ok thanks.  I'll use aio APIs next time around.  Just to be clear,
> does "eliminating the callback" mean I don't have to use a
> bottom-half if I use coroutine reads/writes?

I've only skimmed the patches but I think vTPM runs in its own thread
and uses a BH to kick off I/O requests since the block layer must be
called with the QEMU global mutex held.

In this case you still need the BH since its purpose is to run block
layer code in a thread that holds the QEMU global mutex.
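
As a rough sketch (again with made-up vtpm_* names, not code from the
series): the vTPM thread only schedules the BH, and the BH, running in
the main loop with the global mutex held, enters a coroutine that does
the actual block I/O:

/* Runs in the main loop, QEMU global mutex held. */
static void coroutine_fn vtpm_io_co(void *opaque)
{
    VTPMState *s = opaque;
    s->ret = bdrv_co_writev(s->bs, s->sector, s->nb_sectors, &s->qiov);
    /* wake the vTPM thread here, e.g. with qemu_cond_signal() */
}

static void vtpm_io_bh(void *opaque)
{
    Coroutine *co = qemu_coroutine_create(vtpm_io_co);
    qemu_coroutine_enter(co, opaque);
}

/* Setup (main thread):       s->bh = qemu_bh_new(vtpm_io_bh, s);
 * vTPM thread, per request:  fill in s->qiov etc., then
 *                            qemu_bh_schedule(s->bh) and wait. */

The point is that bdrv_co_writev() only ever runs in the main loop; the
vTPM thread itself never calls into the block layer.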

Stefan


