Re: [Qemu-devel] [PATCH 1/6] block: Keep track of devices' I/O status


From: Markus Armbruster
Subject: Re: [Qemu-devel] [PATCH 1/6] block: Keep track of devices' I/O status
Date: Mon, 26 Sep 2011 15:41:39 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/23.2 (gnu/linux)

Luiz Capitulino <address@hidden> writes:

> On Mon, 26 Sep 2011 11:34:34 +0200
> Markus Armbruster <address@hidden> wrote:
>
>> Luiz Capitulino <address@hidden> writes:
>> 
>> > On Fri, 23 Sep 2011 17:36:53 +0200
>> > Markus Armbruster <address@hidden> wrote:
>> >
>> >> Luiz Capitulino <address@hidden> writes:
>> >> 
>> >> > On Fri, 23 Sep 2011 09:51:24 +0200
>> >> > Markus Armbruster <address@hidden> wrote:
>> >> >
>> >> >> Luiz Capitulino <address@hidden> writes:
>> >> >> 
>> >> >> > This commit adds support to the BlockDriverState type to keep track
>> >> >> > of devices' I/O status.
>> >> >> >
>> >> >> > There are three possible statuses: BDRV_IOS_OK (no error),
>> >> >> > BDRV_IOS_ENOSPC (no space error) and BDRV_IOS_FAILED (any other
>> >> >> > error). The distinction between no space and other errors is
>> >> >> > important because a management application may want to watch for
>> >> >> > no space in order to extend the space assigned to the VM and
>> >> >> > resume it.
>> >> >> >
>> >> >> > QEMU devices supporting the I/O status feature have to enable it
>> >> >> > explicitly by calling bdrv_iostatus_enable() _and_ have to be
>> >> >> > configured to stop the VM on errors (i.e. werror=stop|enospc or
>> >> >> > rerror=stop).
>> >> >> >
>> >> >> > If multiple errors are triggered in sequence, only the first one
>> >> >> > is stored. The I/O status is always reset to BDRV_IOS_OK when the
>> >> >> > 'cont' command is issued.
>> >> >> >
>> >> >> > Subsequent commits will add support to some devices and extend the
>> >> >> > query-block/info block commands to return the I/O status information.
>> >> >> >
>> >> >> > Signed-off-by: Luiz Capitulino <address@hidden>
>> >> >> > ---
>> >> >> >  block.c     |   32 ++++++++++++++++++++++++++++++++
>> >> >> >  block.h     |    9 +++++++++
>> >> >> >  block_int.h |    1 +
>> >> >> >  monitor.c   |    6 ++++++
>> >> >> >  4 files changed, 48 insertions(+), 0 deletions(-)
>> >> >> >
>> >> >> > diff --git a/block.c b/block.c
>> >> >> > index e3fe97f..fbd90b4 100644
>> >> >> > --- a/block.c
>> >> >> > +++ b/block.c
>> >> >> > @@ -221,6 +221,7 @@ BlockDriverState *bdrv_new(const char *device_name)
>> >> >> >      if (device_name[0] != '\0') {
>> >> >> >          QTAILQ_INSERT_TAIL(&bdrv_states, bs, list);
>> >> >> >      }
>> >> >> > +    bs->iostatus = BDRV_IOS_INVAL;
>> >> >> >      return bs;
>> >> >> >  }
>> >> >> >  
>> >> >> > @@ -3181,6 +3182,37 @@ int bdrv_in_use(BlockDriverState *bs)
>> >> >> >      return bs->in_use;
>> >> >> >  }
>> >> >> >  
>> >> >> > +void bdrv_iostatus_enable(BlockDriverState *bs)
>> >> >> > +{
>> >> >> > +    assert(bs->iostatus == BDRV_IOS_INVAL);
>> >> >> > +    bs->iostatus = BDRV_IOS_OK;
>> >> >> > +}
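
(The block.h and block_int.h hunks are snipped in the quote above; going
by the identifiers used in the commit message and in the block.c hunk,
they presumably amount to something like the sketch below -- this is an
inferred sketch, not the literal posted code.)

/* Inferred sketch of the snipped block.h additions, not the posted hunk */
enum {
    BDRV_IOS_INVAL,     /* tracking disabled (device never opted in)   */
    BDRV_IOS_OK,        /* tracking enabled, no error recorded         */
    BDRV_IOS_ENOSPC,    /* VM stopped on a no-space error              */
    BDRV_IOS_FAILED,    /* VM stopped on any other I/O error           */
};

void bdrv_iostatus_enable(BlockDriverState *bs);

/* block_int.h presumably just grows the field the block.c hunk sets:
 *     int iostatus;    (new member of struct BlockDriverState)
 */
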
>> >> >> 
>> >> >> bdrv_new() creates the BDS with I/O status tracking disabled.  Devices
>> >> >> that do tracking declare that by calling this function during
>> >> >> initialization.  Enables tracking if the BDS has suitable
>> >> >> on_write_error and on_read_error.
>> >> >> 
>> >> >> If a device gets hot unplugged, tracking remains enabled.  If the BDS
>> >> >> then gets reused with a device that doesn't do tracking, I/O status
>> >> >> becomes incorrect.  Can't happen right now, because we automatically
>> >> >> delete the BDS on hot unplug, but it's a trap.
>> >> >> 
>> >> >> Suggest to disable tracking in bdrv_detach_dev() or bdrv_attach_dev().
>> >> >
>> >> > Ok, and in bdrv_close() as Zhi Yong said.
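
For concreteness, the detach-side reset could look roughly like this
(sketch only; bdrv_iostatus_disable() is a name made up here, it is not
in the posted patch, and the existing body of bdrv_detach_dev() is
abbreviated):

/* Sketch only: bdrv_iostatus_disable() is not in the posted patch.
 * The point is that once no device is attached, the BDS must not
 * claim to track I/O status any more, so a later device that never
 * calls bdrv_iostatus_enable() can't be blamed for a stale status. */
void bdrv_iostatus_disable(BlockDriverState *bs)
{
    bs->iostatus = BDRV_IOS_INVAL;
}

void bdrv_detach_dev(BlockDriverState *bs, void *dev)
{
    assert(bs->dev == dev);
    bs->dev = NULL;
    /* ... existing detach cleanup ... */
    bdrv_iostatus_disable(bs);
}
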
>> >> 
>> >> Disabling in bdrv_close() makes the status go away on eject.  Makes some
>> >> sense, in a way.  Also complicates the simple rule "status persists from
>> >> stop to cont".  Hmm, consider the following race:
>> >> 
>> >> 1. Management app sends eject command
>> >> 
>> >> 2. qemu gets read error on the same drive, VM stops, I/O status set,
>> >> QEVENT_BLOCK_IO_ERROR sent to management app.
>> >> 
>> >> 3. qemu receives eject command, bdrv_close() resets I/O status
>> >> 
>> >> 4. Management receives QEVENT_BLOCK_IO_ERROR, and would like to find out
>> >> which drive caused it.
>> >> 
>> >> If 2 happens before 3, I/O status is lost.
>> >
>> > Honestly speaking, I didn't intend to make it persistent on eject. Do we
>> > really want to support this? I find it a bit confusing to have the I/O
>> > status on something that has no medium. In that case we should only
>> > return the io-status info if a medium is inserted.
>> >
>> > This way, if the VM stops due to an I/O error and the medium is ejected,
>> > a management application would:
>> >
>> >  1. Issue query-block and find no culprit
>> >  2. Issue cont
>> >  3. The VM would execute normally
>> >
>> > Of course, we have to be clear about this in the documentation, and a
>> > management app has to be prepared to deal with it.
>> 
>> The management app may well want to know that we hit errors on the
>> media, even when the error happens close to an eject.  In particular,
>> they probably want to alert their users on write errors.  Just because
>> you eject a medium doesn't mean you're no longer interested in the data
>> on it ;)
>
> Ok, so I'll have to reset the I/O status in bdrv_open() too, so that
> a newly inserted medium won't carry the previous medium's status.

Same race with "change" (which calls bdrv_close() and bdrv_open())
instead of "eject" (which only calls bdrv_close()).

I figure we'd better define the problem away, like this: I/O status is
what made the VM stop.  It's valid from VM stop to VM continue or drive
delete, whichever comes first.  In particular, media change doesn't
affect it.
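
In code terms that rule keeps the status out of bdrv_open()/bdrv_close()
entirely; the only writer besides the error path would be the 'cont'
handler, roughly like this (sketch, not the posted patch; the function
names below are made up):

/* Sketch, not the posted patch: the status is only cleared when the VM
 * is continued, or implicitly when the BDS itself is deleted.  Neither
 * bdrv_close() nor bdrv_open() touches it, so eject/change can't race
 * with a management app chasing a QEVENT_BLOCK_IO_ERROR. */
static void bdrv_iostatus_reset(BlockDriverState *bs)
{
    if (bs->iostatus != BDRV_IOS_INVAL) {   /* only if tracking is enabled */
        bs->iostatus = BDRV_IOS_OK;
    }
}

/* to be called from the 'cont' command, once per block device */
void bdrv_iostatus_reset_all(void)
{
    BlockDriverState *bs;

    QTAILQ_FOREACH(bs, &bdrv_states, list) {
        bdrv_iostatus_reset(bs);
    }
}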

The only alternative I can see right now is making media change fail
while I/O status isn't okay, but that "cure" looks far worse than the
disease, and I'm not at all sure it's sufficient.

Two more instances of the race, both caused by destroying the drive (and
thus its I/O status) while it's being used by a device (and thus prone to
getting its I/O status set):

1. Device hot unplug while drive is not empty

Possible fix: stop auto-destroying drives on device unplug.  Then a
management app can unplug the device, wait until it's gone, and then
delete the drive.  Safe, because the I/O status can't change anymore
once the device is detached from the drive.

2. drive_del of a non-empty drive while a device is attached.

Possible fix: split into detach and delete.
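
Hand-waving sketch of what the split means (names made up, not a
proposal for the actual command syntax):

/* Hand-waving sketch, names made up.  Step one only cuts the device
 * off from the drive, so the I/O status can't change any more but is
 * still there for management to inspect; step two is what finally
 * destroys the BDS (and the status with it), and is only issued once
 * management is done looking. */
void drive_detach(BlockDriverState *bs)
{
    bdrv_detach_dev(bs, bs->dev);   /* device stops using the drive */
}

void drive_delete(BlockDriverState *bs)
{
    bdrv_delete(bs);                /* drive and its I/O status go away */
}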


