From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH v2] qemu-iotests: add qed.py image manipulation utility
Date: Thu, 26 Jul 2012 13:11:35 +0100
User-agent: Mutt/1.5.21 (2010-09-15)

On Thu, Jul 26, 2012 at 01:34:06PM +0200, Kevin Wolf wrote:
> Am 26.07.2012 09:08, schrieb Stefan Hajnoczi:
> > The qed.py utility can inspect and manipulate QED image files.  It can
> > be used for testing to see the state of image metadata and also to
> > inject corruptions into the image file.  It also has a scrubbing feature
> > to copy just the metadata out of an image file, allowing users to share
> > broken image files without revealing data in bug reports.
> > 
> > This has lived in my local repo for a long time but could be useful
> > to others.  There are two use cases:
> > 
> >  1. qemu-iotests that need to manipulate (e.g. corrupt) QED image files.
> >  2. Users that want to inspect or recover their QED image files.
> > 
> > Signed-off-by: Stefan Hajnoczi <address@hidden>
> > ---
> > Dong Xu Wang <address@hidden> has contributed generic qemu-img info
> > support for fragmentation statistics and dirty flag status.  I have dropped
> > fragmentation statistics from qed.py.  Setting the dirty flag is still
> > supported in qed.py for testing.
> > 
> >  tests/qemu-iotests/qed.py |  234 +++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 234 insertions(+)
> >  create mode 100755 tests/qemu-iotests/qed.py
> 
> > +def random_table_item(table):
> > +    return random.choice([(index, offset) for index, offset in enumerate(table) if offset != 0])
> > +
> > +def corrupt_table_duplicate(table):
> > +    '''Corrupt a table by introducing a duplicate offset'''
> > +    _, dup_victim = random_table_item(table)
> > +
> > +    for i in xrange(len(table)):
> > +        dup_target = random.randint(0, len(table) - 1)
> > +        if table[dup_target] != dup_victim:
> > +            table[dup_target] = dup_victim
> > +            return
> > +    raise Exception('no duplication corruption possible in table')
> 
> At least the message isn't quite correct. Consider a table whose entries
> are mostly the same (probably unallocated) and which has only one
> allocated entry. In this situation the chances should be relatively high
> that the random number never hits the one differing entry.
> 
> Not sure how bad this "relatively high" really is, but I could imagine
> that we would see an occasional false positive if a test case used this.

The loop is silly; I will replace it with a better solution.
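Something along these lines, maybe (untested sketch, reusing
random_table_item() from the patch): pick the duplication target from the
indices whose current value differs from the victim, and fail up front if
there is none.

def corrupt_table_duplicate(table):
    '''Corrupt a table by introducing a duplicate offset'''
    _, dup_victim = random_table_item(table)

    # Candidate targets are all entries that do not already hold the victim
    # offset; if every entry is identical there is nothing to corrupt.
    targets = [i for i, offset in enumerate(table) if offset != dup_victim]
    if not targets:
        raise Exception('no duplication corruption possible in table')
    table[random.choice(targets)] = dup_victim

That drops the retry loop and makes the failure case deterministic.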

> > +def cmd_need_check(qed, *args):
> > +    '''need-check [on|off] - Test, set, or clear the QED_F_NEED_CHECK header bit'''
> > +    if not args:
> > +        print bool(qed.header['features'] & QED_F_NEED_CHECK)
> > +        return
> > +
> > +    if args[0] == 'on':
> > +        qed.header['features'] |= QED_F_NEED_CHECK
> > +    elif args[1] == 'off':
> 
> args[0]

Good catch

> > +        qed.header['features'] &= ~QED_F_NEED_CHECK
> > +    else:
> > +        err('unrecognized sub-command')
> > +    qed.store_header()
> > +
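For reference, with the args[0] fix applied the branch would read
(illustrative only, the respin may differ):

    if args[0] == 'on':
        qed.header['features'] |= QED_F_NEED_CHECK
    elif args[0] == 'off':
        qed.header['features'] &= ~QED_F_NEED_CHECK
    else:
        err('unrecognized sub-command')
    qed.store_header()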
> > +def cmd_zero_cluster(qed, pos, *args):
> > +    '''zero-cluster <pos> [<n>] - Zero data clusters'''
> > +    pos, n = int(pos), 1
> > +    if args:
> > +        if len(args) != 1:
> > +            err('expected one argument')
> > +        n = int(args[0])
> > +
> > +    for i in xrange(n):
> > +        l1_index = pos / qed.header['cluster_size'] / len(qed.l1_table)
> > +        if qed.l1_table[l1_index] == 0:
> > +            err('no l2 table allocated')
> > +
> > +        l2_offset = qed.l1_table[l1_index]
> > +        l2_table = qed.read_table(l2_offset)
> > +
> > +        l2_index = (pos / qed.header['cluster_size']) % len(qed.l1_table)
> > +        l2_table[l2_index] = 1 # zero the data cluster
> > +        qed.write_table(l2_offset, l2_table)
> > +        pos += qed.header['cluster_size']
> 
> Isn't it quite slow to write the table after each updated cluster? But
> okay, it probably works well enough for small test cases.

Yes, it is slow, but I have never noticed it being an issue.  I would
like to leave this as it is.
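If it ever does become a problem, the writes could be batched per L2
table, something like this (rough, untested sketch, not part of this
patch):

def cmd_zero_cluster(qed, pos, *args):
    '''zero-cluster <pos> [<n>] - Zero data clusters'''
    pos, n = int(pos), 1
    if args:
        if len(args) != 1:
            err('expected one argument')
        n = int(args[0])

    # Collect modified L2 tables and write each one back only once.
    dirty = {}  # l2_offset -> modified L2 table
    for i in xrange(n):
        l1_index = pos / qed.header['cluster_size'] / len(qed.l1_table)
        if qed.l1_table[l1_index] == 0:
            err('no l2 table allocated')

        l2_offset = qed.l1_table[l1_index]
        if l2_offset not in dirty:
            dirty[l2_offset] = qed.read_table(l2_offset)

        l2_index = (pos / qed.header['cluster_size']) % len(qed.l1_table)
        dirty[l2_offset][l2_index] = 1 # zero the data cluster
        pos += qed.header['cluster_size']

    for l2_offset, l2_table in dirty.items():
        qed.write_table(l2_offset, l2_table)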



