Re: [Qemu-devel] [PATCH v2 0/5]: QMP: Proper thin provisioning support


From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH v2 0/5]: QMP: Proper thin provisioning support
Date: Thu, 04 Aug 2011 10:10:28 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:5.0) Gecko/20110707 Thunderbird/5.0

On 03.08.2011 20:08, Luiz Capitulino wrote:
> On Wed, 03 Aug 2011 17:39:04 +0200
> Kevin Wolf <address@hidden> wrote:
> 
>> On 03.08.2011 17:19, Luiz Capitulino wrote:
>>> Roughly speaking, thin provisioning is a feature where the VM is started
>>> with a small disk and space is allocated on demand.
>>>
>>> It's already possible for QMP clients to implement this feature by using
>>> the BLOCK_IO_ERROR event. However, the event can be missed. When this
>>> happens, QMP clients need a way to query whether any block device has hit
>>> a no-space condition.
>>>
>>> This is what this series is about: it extends the query-block command to
>>> contain the disk's I/O status.
>>>
>>> Please, note that this series depends on the following two other series:
>>>
>>>  1. [PATCH 00/55] Block layer cleanup & fixes (by Markus)
>>>  2. [PATCH 0/7]: Introduce the QemuState type (by me)
>>
>> It feels strange that device models are involved with this. The block
>> layer already knows the error number; it just tells the device model so
>> that it can ask it back. Wouldn't it be easier to store it in the
>> BlockDriverState?
> 
> My first version did that, but looking at it now I may have done it wrong,
> because I was setting the iostatus in the devices (instead of setting it
> in the block layer itself). Here's an example:
> 
>  http://lists.nongnu.org/archive/html/qemu-devel/2011-07/msg00421.html
> 
> But I'm not sure what the best place is. I'm hoping that you and/or
> Markus can help me here. Also note that Markus already told us what
> he thinks:
> 
> """
> Actually, this is a symptom of the midlayer disease.  I suspect things
> would be simpler if we hold the status in its rightful owner, the device
> model.  Need a getter for it.  I'm working on a patch series that moves
> misplaced state out of the block layer into device models and block
> drivers, and an I/O status getter will fit in easily there.
> """
> 
>  http://lists.nongnu.org/archive/html/qemu-devel/2011-07/msg00937.html
> 
> Can you guys come to an agreement?

I think you can argue either way, but the main point is that we should
clearly assign it to either host or guest state and not involve both sides.

For me, an ENOSPC is clearly host state, so I would prefer to see it
done in the block layer. I would expect an I/O status property of a
device to provide whatever the guest sees, and that's not an ENOSPC.
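
Roughly what I have in mind, as a sketch only (BlockIOStatus, the iostatus
field and the bdrv_iostatus_*() helpers below are made-up names for
illustration, not something that exists in the tree):

    #include <errno.h>

    /* Sketch: keep the I/O status in the BlockDriverState, so the block
     * layer records it on its error path and device models stay out of it. */
    typedef enum {
        BDRV_IOS_OK,
        BDRV_IOS_FAILED,
        BDRV_IOS_ENOSPC,
    } BlockIOStatus;

    typedef struct BlockDriverState {
        /* ... existing fields ... */
        BlockIOStatus iostatus;
    } BlockDriverState;

    /* called from the block layer's error handling path */
    void bdrv_iostatus_set_err(BlockDriverState *bs, int error)
    {
        bs->iostatus = (error == ENOSPC) ? BDRV_IOS_ENOSPC : BDRV_IOS_FAILED;
    }

    /* getter that query-block could use */
    BlockIOStatus bdrv_get_iostatus(BlockDriverState *bs)
    {
        return bs->iostatus;
    }

With something like that, query-block only needs the getter and the device
models don't have to be touched at all.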

>> Also, as I already commented on v1, I think it's a bad idea to let
>> requests that complete later overwrite the error code with "success".
>> That's quite possible in an ENOSPC case, I think.
> 
> Really? I'm under the assumption that all pending I/O will be completed
> as soon as the VM is stopped. Am I wrong?

Right, vm_stop() will flush all requests. And of course this means that
any request that completes successfully during this flush will reset the
error value.
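
In other words, I'd rather only latch errors in the completion path and
never let a later success clear them. Something like this, again just a
sketch using the made-up names from above:

    /* Only latch errors; a later successful request must not clear the
     * status, otherwise the ENOSPC gets lost during the vm_stop() flush. */
    void bdrv_iostatus_update(BlockDriverState *bs, int error)
    {
        if (error && bs->iostatus == BDRV_IOS_OK) {
            bdrv_iostatus_set_err(bs, error);
        }
        /* successful completions intentionally leave bs->iostatus alone */
    }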

> What we need here is a way to "reset" the io-status field. That is, the VM
> will stop due to ENOSPC, the management application will handle it and
> resume the VM. At this point the io-status has to be reset, otherwise it
> will contain stale data.
> 
> With the overwrite, this "reset" happens automatically as soon as the
> VM is resumed and doesn't hit the same (or another) error.
> 
> The other option would be to allow the management application to manually
> reset the field.
> 
> More ideas?

Maybe we could reset it automatically in 'cont'. I'm not sure if this is
too much magic, but if the VM isn't stopped, you can't do anything useful
with the value anyway.
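
Something along these lines (sketch again; bdrv_iostatus_reset() is a
made-up helper, not the actual interface):

    /* clear the latched status when the guest is resumed */
    void bdrv_iostatus_reset(BlockDriverState *bs)
    {
        bs->iostatus = BDRV_IOS_OK;
    }

    /* and in the 'cont' command handler, before restarting the VM, walk
     * all block devices and call bdrv_iostatus_reset() on each of them */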

Kevin


