From: Kevin Wolf
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH 3/3] qemu-iotests: Test exiting qemu with running job
Date: Fri, 9 Jun 2017 14:58:31 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On 09.06.2017 at 14:14, Eric Blake wrote:
> On 06/09/2017 06:50 AM, Kevin Wolf wrote:
> > When qemu is exited, all running jobs should be cancelled successfully.
> > This adds a test for this for all types of block jobs that currently
> > exist in qemu.
> > 
> > Signed-off-by: Kevin Wolf <address@hidden>
> > ---
> >  tests/qemu-iotests/185     | 189 +++++++++++++++++++++++++++++++++++++++++++++
> >  tests/qemu-iotests/185.out |  59 ++++++++++++++
> >  tests/qemu-iotests/group   |   1 +
> >  3 files changed, 249 insertions(+)
> >  create mode 100755 tests/qemu-iotests/185
> >  create mode 100644 tests/qemu-iotests/185.out
> > 
> 
> > +
> > +_send_qemu_cmd $h \
> > +    "{ 'execute': 'human-monitor-command',
> > +       'arguments': { 'command-line':
> > +                      'qemu-io disk \"write 0 4M\"' } }" \
> > +    "return"
> 
> My first reaction? "Why are we still dropping back to HMP?"  My second -
> "Oh yeah - qemu-io is a debugging interface, and we really don't
> need/want it in QMP".  This part is fine.
> 
> > +_send_qemu_cmd $h \
> > +    "{ 'execute': 'drive-backup',
> > +       'arguments': { 'device': 'disk',
> > +                      'target': '$TEST_IMG.copy',
> > +                      'format': '$IMGFMT',
> > +                      'sync': 'full',
> > +                      'speed': 65536 } }" \
> 
> Fun with slow speeds :)
> 
> 64k is slow enough compared to your 4M chunk that you should be fairly
> immune to a heavy load allowing the job to converge.  However,
> 
> > +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, 
> > "event": "SHUTDOWN", "data": {"guest": false}}
> > +{"timestamp": {"seconds":  TIMESTAMP, "microseconds":  TIMESTAMP}, 
> > "event": "BLOCK_JOB_CANCELLED", "data": {"device": "disk", "len": 67108864, 
> > "offset": 524288, "speed": 65536, "type": "commit"}}
> 
> I'm worried that if you don't sanitize at least offset, you will still
> be prone to some race conditions changing the output.  You may want to
> add in some additional filtering on the output to be safer.

I considered that at first, but then I realised that these offsets are
indeed predictable and we want to know if they change (it would likely
mean that the throttling is broken).

If you look at the individual cases, we have (the arithmetic is spelled
out below the list):

* offset=512k for (intermediate) commit and streaming. This is exactly
  the buffer size for a single request and will be followed by a delay
  of eight seconds before the next chunk is copied, so we will never get
  a different value here.

* offset=4M for active commit and mirror, because the mirror job has a
  larger buffer size by default, so one request completes it all. This
  number is already the maximum, so nothing is going to change here
  either.

* offset=64k for backup, which works cluster by cluster. We know that
  the cluster size is exactly 64k, and while we have only one second
  of delay here, that's still plenty of time for the 'quit' command to
  arrive.
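
To spell the arithmetic out as a quick shell check (chunk sizes as in
the list above; the assumption on my side is that each job consumes its
whole per-request quota at once before the rate limit kicks in):

    # 512 KiB streaming/commit chunk at speed=65536 bytes/s
    echo $(( 512 * 1024 / 65536 ))   # -> 8 seconds until the next chunk

    # 64 KiB backup cluster at speed=65536 bytes/s
    echo $(( 64 * 1024 / 65536 ))    # -> 1 second until the next cluster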

Note that only the 'quit' command must be received in time on the QMP
socket; everything after that happens synchronously, so even on a
heavily loaded host I don't see this failing. Well, maybe if the host is
swapping itself to death, but then you have other problems.
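
To illustrate the sequence I mean (same _send_qemu_cmd helper as in the
quoted patch; the final cleanup call is just my guess at what the test
ends with):

    _send_qemu_cmd $h \
        "{ 'execute': 'quit' }" \
        "return"
    # From here on everything is synchronous: qemu emits SHUTDOWN,
    # cancels the running job (BLOCK_JOB_CANCELLED) and exits.
    wait=1 _cleanup_qemu   # assumed cleanup helper from common.qemu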

So I think the offsets actually make sense as part of the test.

Kevin
