From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH v6 00/20] VHDX log replay and write support, .bdrv_create()
Date: Wed, 2 Oct 2013 11:03:43 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Tue, Oct 01, 2013 at 10:15:46AM -0400, Jeff Cody wrote:
> On Tue, Oct 01, 2013 at 03:41:04PM +0200, Stefan Hajnoczi wrote:
> > On Wed, Sep 25, 2013 at 05:02:45PM -0400, Jeff Cody wrote:
> > > 
> > > This patch series contains the initial VHDX log parsing, replay,
> > > write support, and image creation.
> > > 
> > > === v6 changes ===
> > > https://github.com/codyprime/qemu-kvm-jtc/tree/vhdx-write-v6-upstream
> > > 
> > > Rebased to latest qemu/master:
> > > 
> > > Patch 16/20: .bdrv_create() propagates Error, and bdrv_unref() used
> > >              instead of bdrv_delete().
> > > 
> > > Patches 17 & 18 are already included in another series:
> > >     [PATCH v3 0/3] qemu-iotests with sample images, vhdx test, cleanup
> > > 
> > > They are included here to provide a base for patches 19 & 20.  If the
> > > above series is applied before this series, then patches 17 and 18
> > > can be ignored.
> > > 
> > > Patch 19/20: In the qemu-iotests _make_test_img() function, filter out
> > >              vhdx-specific options for .bdrv_create().
> > > 
> > > Patch 20/20: Add VHDX write test case to 064.
> > 
> > Sorry for the late review.  Feel free to poke Kevin or me if we're being
> > slow - the squeaky wheel gets the grease.
> > 
> > I left comments on a couple of the core journal and write patches.  It
> > looks pretty good overall.
> > 
> > How is the journal code tested?  IIUC the sample image file does not
> > have a dirty journal.  Please try to include an image that Hyper-V
> > created with a dirty journal so we can be sure flushing the journal
> > works.
> 
> Getting Hyper-V-generated files with dirty journals is a bit challenging.
> The method I've been using is to run a Linux guest under Hyper-V and
> generate a lot of guest I/O that causes metadata updates (e.g. dd
> if=/dev/sda of=/dev/sdb, etc.).  While that is going on, I hit the
> reset button on the server.  Kinda messy, but it works, if not very
> deterministically.  I initially tried just killing the process, but
> that never left me with a dirty log.
> 
> The problem with the images I've created in this manner is that they
> are much too large to put in our repo.  However, if I write repetitive
> data, the image should compress adequately... I'll give that a try.
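
Something along these lines inside the guest might do the trick while
keeping the captured image compressible (untested sketch; the device
path, pattern, and sizes are just placeholders for whatever scratch
disk the guest has):

#!/usr/bin/env python
# Rough sketch: generate lots of flushed writes in the guest, but with a
# repeating pattern, so the VHDX captured after a hard reset still
# compresses well enough for the sample-images directory.
import os
import sys

dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdb"  # placeholder scratch disk
block = b"\xa5" * (1 << 20)                             # 1 MiB of repeating data

with open(dev, "wb", buffering=0) as f:
    for _ in range(4096):                               # roughly 4 GiB of writes
        f.write(block)
        os.fsync(f.fileno())        # push the writes (and metadata) out now
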
> 
> I've tested the other direction too (which is much easier) - create a
> VHDX image under QEMU, leave a dirty journal, and then open the image
> file with the dirty journal under Hyper-V.  It replayed it, and the
> sha1sum of the affected file inside the image was correct after replay.
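
Something like this ought to reproduce the QEMU side of that run
(untested sketch; the image size, the write pattern, and especially the
kill timing are guesses and may need a few tries before a run actually
leaves the log dirty):

#!/usr/bin/env python
# Rough sketch: create a VHDX under QEMU, start a long qemu-io write and
# SIGKILL the process before it can close the image cleanly, so that the
# log is left with unflushed entries.
import signal
import subprocess
import time

subprocess.check_call(["qemu-img", "create", "-f", "vhdx", "test.vhdx", "10G"])

p = subprocess.Popen(["qemu-io", "-c", "write -P 0xa5 0 512M", "test.vhdx"])
time.sleep(0.5)                  # let some writes land before pulling the plug
p.send_signal(signal.SIGKILL)
p.wait()
# test.vhdx can now be handed to Hyper-V (or QEMU) for log replay.
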

I think this approach may be easier.  Fabricate images with various log
operations, then replay the logs with both QEMU and Hyper-V and compare
the results.  You can store the sha1sum in the qemu-iotest so Hyper-V
only needs to be run once, when designing the test case.
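
As a concrete (untested) sketch of that comparison, with the file name
and the expected digest as placeholders that a real test would fill in
after running the Hyper-V replay once:

#!/usr/bin/env python
# Rough sketch of the check an iotest could do: open the fabricated image
# read-write so QEMU replays the log, dump the guest-visible data to raw,
# and compare its sha1 with the digest recorded from Hyper-V's replay of
# the same image.
import hashlib
import subprocess

IMAGE = "vhdx-log-test.vhdx"      # placeholder: fabricated dirty-log image
EXPECTED_SHA1 = "0000000000000000000000000000000000000000"  # from Hyper-V run

# Opening the image read-write should trigger the log replay added by
# this series.
subprocess.check_call(["qemu-io", "-c", "read 0 512", IMAGE])

# Dump what the guest would see and hash it.
subprocess.check_call(["qemu-img", "convert", "-f", "vhdx", "-O", "raw",
                       IMAGE, "replayed.raw"])
with open("replayed.raw", "rb") as f:
    sha1 = hashlib.sha1(f.read()).hexdigest()

print("match" if sha1 == EXPECTED_SHA1 else "MISMATCH: " + sha1)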

A few test cases:

1. Wrapping the log ring buffer between entries.
2. Wrapping the log ring buffer inside an entry with data blocks.
3. Finding the active sequence in a log that contains other, outdated
   entries.
4. Applying data descriptors from the log.
5. Applying zero descriptors from the log.
6. Applying file offset values from descriptors (truncation).
7. Checksum mismatch (see the sketch below).
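
For case 7, for example, fabrication could be as simple as corrupting a
byte inside the log region of an image that already has a dirty log
(untested sketch; the offsets are placeholders and would really be
taken from the log offset/length fields in the image's VHDX header):

#!/usr/bin/env python
# Rough sketch: copy an image with a dirty log and flip one byte inside
# its log region so the log entry checksum no longer matches.  The
# offsets below are placeholders, not real header values.
import shutil

SRC = "dirty-log.vhdx"             # existing sample image with a dirty log
DST = "dirty-log-bad-csum.vhdx"    # fabricated checksum-mismatch image
LOG_OFFSET = 1024 * 1024           # placeholder for the header's log offset
CORRUPT_AT = LOG_OFFSET + 64       # somewhere inside the first log entry

shutil.copyfile(SRC, DST)
with open(DST, "r+b") as f:
    f.seek(CORRUPT_AT)
    b = f.read(1)
    f.seek(CORRUPT_AT)
    f.write(bytes([b[0] ^ 0xFF]))  # invert one byte to break the checksum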

Stefan


