
Re: [Qemu-devel] drive_del vs. device_del: what should come first?


From: Markus Armbruster
Subject: Re: [Qemu-devel] drive_del vs. device_del: what should come first?
Date: Wed, 02 Apr 2014 19:40:09 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.2 (gnu/linux)

Heinz Graalfs <address@hidden> writes:

> On 01/04/14 17:48, Markus Armbruster wrote:
>> Heinz Graalfs <address@hidden> writes:
>>
>>> Hi Kevin,
>>>
>>> doing a
>>>
>>>       virsh detach-device ...
>>>
>>> ends up in the following QEMU monitor commands:
>>>
>>> 1. device_del ...
>>> 2. drive_del ...
>>>
>>> qmp_device_del() performs the device unplug path.
>>> In the case of a block device, do_drive_del() tries to
>>> prevent further IO against the host device.
>>>
>>> However, bdrv_find() during drive_del() results in
>>> an error, because the device is already gone. Due to
>>> this error, all the bdrv_xxx calls to quiesce the block
>>> driver, as well as all other processing, are skipped.
>>>
>>> Is the sequence that libvirt triggers OK?
>>> Shouldn't drive_del be executed first?
>>
>> No.
>
> OK, I see. The drive is deleted implicitly (release_drive()).
> Re-adding after a device_del() requires another drive_add() AND device_add().

Yes.  The automatic drive delete on unplug is a wart that will not be
preserved in the new interfaces we're building now.
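
For anyone re-adding by hand at the monitor, that would look roughly
like this; the file name, the IDs, and the virtio-blk-pci driver are
just placeholders (on s390 it would be virtio-blk-ccw, for instance):

    (qemu) drive_add dummy if=none,file=/path/to/disk.img,id=drive1
    (qemu) device_add virtio-blk-pci,drive=drive1,id=disk1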

> (Doing just a device_add() complains about the missing drive.
> A subsequent info qtree makes QEMU abort.)

Really?  Got a reproducer for me?

>> drive_del is nasty.  Its purpose is to revoke access to an image even
>> when the guest refuses to cooperate.  To the guest, this looks like
>> hardware failure.
>
> Deleting a device during active IO is nasty and it should look like a
> hardware failure. I would expect lots of errors.
>
>>
>> If you drive_del before device_del, even a perfectly well-behaved
>> guest is exposed to a terminally broken device between drive_del and
>> completion of unplug.
>
> The early drive_del() would mean that no further IO would be
> possible.

A polite way to describe the effect of drive_del on the guest.

drive_del makes all attempts at actual I/O error out without any
forewarning whatsoever.  If you do that to your guest, and something
breaks, you get to keep the pieces.  Even if you "only" do it for a
short period of time.

>> Always try a device_del first, and only if that does not complete within
>> reasonable time, and you absolutely must revoke access to the image,
>> then whack it over the head with drive_del.
>
> What is this reasonable time?

Depends on how heavily loaded the host and guest are.  Suggest measuring
with a suitably thrashing guest, then shifting left to taste.
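
Concretely, the conservative sequence is something like this (disk1 and
drive1 are made-up IDs, the wait in the middle is whatever timeout you
arrive at, and the last step happens only if the unplug never completes
and you really must revoke access to the image):

    (qemu) device_del disk1
    ... wait for the unplug to finish, e.g. poll "info qtree" until
        disk1 is gone, or watch for the QMP DEVICE_DELETED event ...
    (qemu) drive_del drive1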

> On s390 we experience problems (a QEMU abort) when async block IO
> completes while the virtqueues are already gone. I suppose the
> bdrv_drain_all() in bdrv_close() is a little late. I don't see such
> problems with an early bdrv_drain_all() (drive_del) and an unplug
> (device_del) afterwards.

Sounds like a bug.  Got a reproducer?
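
(For reference, the scenario described above presumably boils down to
something like the following, with placeholder names and the disk
assumed to show up as /dev/vda inside the guest: keep I/O in flight
against it, then unplug while requests are still pending.)

    (guest) dd if=/dev/vda of=/dev/null bs=1M &
    (qemu)  device_del disk1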


