qemu-devel

Re: [Qemu-devel] [PATCH v8 1/4] block: add the block queue support


From: Zhi Yong Wu
Subject: Re: [Qemu-devel] [PATCH v8 1/4] block: add the block queue support
Date: Tue, 18 Oct 2011 17:29:03 +0800

On Tue, Oct 18, 2011 at 4:36 PM, Kevin Wolf <address@hidden> wrote:
> Am 18.10.2011 10:07, schrieb Zhi Yong Wu:
>> On Mon, Oct 17, 2011 at 6:17 PM, Kevin Wolf <address@hidden> wrote:
>>>>>> +
>>>>>> +typedef struct BlockQueueAIOCB BlockQueueAIOCB;
>>>>>> +
>>>>>> +struct BlockQueue {
>>>>>> +    QTAILQ_HEAD(requests, BlockQueueAIOCB) requests;
>>>>>> +    bool req_failed;
>>>>>> +    bool flushing;
>>>>>> +};
>>>>>
>>>>> I find req_failed pretty confusing. Needs documentation at least, but
>>>>> most probably also a better name.
>>>> OK. request_has_failed?
>>>
>>> No, that doesn't describe what it's really doing.
>>>
>>> You set req_failed = true by default and then on some obscure condition
>>> clear it or not. It's tracking something, but I'm not sure what meaning
>>> it has during the whole process.
>> In qemu_block_queue_flush, when bdrv_aio_readv/writev returns NULL:
>> if req_failed has been changed to false, it indicates that the request
>> exceeded the limits again; if req_failed is still true, it indicates
>> that an I/O error took place, and at that moment
>> qemu_block_queue_callback(request, -EIO) needs to be called to report
>> this to the upper layer.
>
> Okay, this makes some more sense now.
>
> How about reversing the logic and maintaining a limit_exceeded flag
> instead of a req_failed? I think this would be clearer.
OK
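A minimal stand-alone sketch of the reversed logic Kevin suggests, using toy types and names rather than the real QEMU structures (-5 stands in for -EIO): the queue tracks limit_exceeded directly, so flush can tell "throttled again, keep it queued" apart from "real I/O error, complete with -EIO" without the inverted default.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy request: the two bools mimic what the real handler would report. */
struct req {
    struct req *next;
    bool submit_ok;       /* would bdrv_aio_readv/writev accept it? */
    bool over_limit;      /* if refused, was it because of throttling? */
    int  cb_error;        /* error passed to the completion callback */
};

struct queue {
    struct req *head;
    bool limit_exceeded;  /* replaces req_failed, with reversed meaning */
    bool flushing;
};

/* Handler: returns true if the request was submitted; if it was refused
 * because the limit was hit again, record that fact on the queue. */
static bool handle(struct queue *q, struct req *r)
{
    if (r->submit_ok) {
        return true;
    }
    if (r->over_limit) {
        q->limit_exceeded = true;
    }
    return false;
}

static void flush(struct queue *q)
{
    q->flushing = true;
    while (q->head) {
        struct req *r = q->head;
        q->head = r->next;

        q->limit_exceeded = false;
        if (!handle(q, r)) {
            if (q->limit_exceeded) {
                /* throttled again: put it back and stop flushing */
                r->next = q->head;
                q->head = r;
            } else {
                /* real failure: complete it, do not re-queue */
                r->cb_error = -5;
            }
            break;
        }
    }
    q->flushing = false;
}
```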
>
>>>>>> +void qemu_del_block_queue(BlockQueue *queue)
>>>>>> +{
>>>>>> +    BlockQueueAIOCB *request, *next;
>>>>>> +
>>>>>> +    QTAILQ_FOREACH_SAFE(request, &queue->requests, entry, next) {
>>>>>> +        QTAILQ_REMOVE(&queue->requests, request, entry);
>>>>>> +        qemu_aio_release(request);
>>>>>> +    }
>>>>>> +
>>>>>> +    g_free(queue);
>>>>>> +}
>>>>>
>>>>> Can we be sure that no AIO requests are in flight that still use the now
>>>>> released AIOCB?
>>>> Yeah, since the qemu core code runs serially, I think that when
>>>> qemu_del_block_queue is performed, no requests are in flight. Right?
>>>
>>> Patch 2 has this code:
>>>
>>> +void bdrv_io_limits_disable(BlockDriverState *bs)
>>> +{
>>> +    bs->io_limits_enabled = false;
>>> +
>>> +    if (bs->block_queue) {
>>> +        qemu_block_queue_flush(bs->block_queue);
>>> +        qemu_del_block_queue(bs->block_queue);
>>> +        bs->block_queue = NULL;
>>> +    }
>>>
>>> Does this mean that you can't disable I/O limits while the VM is running?
>> No, you can disable them even while the VM is running.
>
> Okay, in general qemu_block_queue_flush() empties the queue so that
> there are no requests left that qemu_del_block_queue() could drop from
> the queue. So in the common case it doesn't even enter the FOREACH loop.
I think that we should check !QTAILQ_EMPTY(&queue->requests) rather
than use QTAILQ_FOREACH_SAFE in qemu_del_block_queue(),
right?
>
> I think problems start when requests have failed or exceeded the limit
> again, then you have requests queued even after
> qemu_block_queue_flush(). You must be aware of this, otherwise the code
> in qemu_del_block_queue() wouldn't exist.
>
> But you can't release the ACBs without having called their callback,
> otherwise the caller would still assume that its ACB pointer is valid.
> Maybe calling the callback before releasing the ACB would be enough.
Good, thanks.
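Kevin's point can be sketched stand-alone like this (toy types, not the real AIOCB pool; -5 stands in for -EIO): any request still queued when the queue is deleted gets its callback invoked before its memory is released, so the caller never holds a dangling ACB pointer.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy ACB: the completion callback tells the caller the request is done. */
typedef void (*cb_fn)(void *opaque, int ret);

struct acb {
    struct acb *next;
    cb_fn cb;
    void *opaque;
};

struct queue { struct acb *head; };

/* Example callback: counts requests completed with -EIO. */
static int completed;
static void note_done(void *opaque, int ret)
{
    (void)opaque;
    completed += (ret == -5);
}

/* Any ACB still queued after the flush must have its callback invoked
 * (here with -EIO) before it is released; otherwise the caller would
 * still assume its ACB pointer is valid. */
static void del_queue(struct queue *q)
{
    struct acb *r = q->head;
    while (r) {
        struct acb *next = r->next;
        r->cb(r->opaque, -5);   /* complete before freeing */
        free(r);
        r = next;
    }
    free(q);
}
```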
>
> However, for failed requests see below.
>
>>>
>>>>>> +
>>>>>> +BlockDriverAIOCB *qemu_block_queue_enqueue(BlockQueue *queue,
>>>>>> +                        BlockDriverState *bs,
>>>>>> +                        BlockRequestHandler *handler,
>>>>>> +                        int64_t sector_num,
>>>>>> +                        QEMUIOVector *qiov,
>>>>>> +                        int nb_sectors,
>>>>>> +                        BlockDriverCompletionFunc *cb,
>>>>>> +                        void *opaque)
>>>>>> +{
>>>>>> +    BlockDriverAIOCB *acb;
>>>>>> +    BlockQueueAIOCB *request;
>>>>>> +
>>>>>> +    if (queue->flushing) {
>>>>>> +        queue->req_failed = false;
>>>>>> +        return NULL;
>>>>>> +    } else {
>>>>>> +        acb = qemu_aio_get(&block_queue_pool, bs,
>>>>>> +                           cb, opaque);
>>>>>> +        request = container_of(acb, BlockQueueAIOCB, common);
>>>>>> +        request->handler       = handler;
>>>>>> +        request->sector_num    = sector_num;
>>>>>> +        request->qiov          = qiov;
>>>>>> +        request->nb_sectors    = nb_sectors;
>>>>>> +        request->real_acb      = NULL;
>>>>>> +        QTAILQ_INSERT_TAIL(&queue->requests, request, entry);
>>>>>> +    }
>>>>>> +
>>>>>> +    return acb;
>>>>>> +}
>>>>>> +
>>>>>> +static int qemu_block_queue_handler(BlockQueueAIOCB *request)
>>>>>> +{
>>>>>> +    int ret;
>>>>>> +    BlockDriverAIOCB *res;
>>>>>> +
>>>>>> +    res = request->handler(request->common.bs, request->sector_num,
>>>>>> +                           request->qiov, request->nb_sectors,
>>>>>> +                           qemu_block_queue_callback, request);
>>>>>> +    if (res) {
>>>>>> +        request->real_acb = res;
>>>>>> +    }
>>>>>> +
>>>>>> +    ret = (res == NULL) ? 0 : 1;
>>>>>> +
>>>>>> +    return ret;
>>>>>
>>>>> You mean return (res != NULL); and want to have bool as the return value
>>>>> of this function.
>>>> Yeah, thanks. I will modify it as below:
>>>> ret = (res == NULL) ? false : true;
>>>
>>> ret = (res != NULL) is really more readable.
>> I have adopted Paolo's suggestion.
>>
>>>
>>>> and
>>>> static bool qemu_block_queue_handler()
>>>>
>>>>>
>>>>>> +}
>>>>>> +
>>>>>> +void qemu_block_queue_flush(BlockQueue *queue)
>>>>>> +{
>>>>>> +    queue->flushing = true;
>>>>>> +    while (!QTAILQ_EMPTY(&queue->requests)) {
>>>>>> +        BlockQueueAIOCB *request = NULL;
>>>>>> +        int ret = 0;
>>>>>> +
>>>>>> +        request = QTAILQ_FIRST(&queue->requests);
>>>>>> +        QTAILQ_REMOVE(&queue->requests, request, entry);
>>>>>> +
>>>>>> +        queue->req_failed = true;
>>>>>> +        ret = qemu_block_queue_handler(request);
>>>>>> +        if (ret == 0) {
>>>>>> +            QTAILQ_INSERT_HEAD(&queue->requests, request, entry);
>>>>>> +            if (queue->req_failed) {
>>>>>> +                qemu_block_queue_callback(request, -EIO);
>>>>>> +                break;
>
> When a request has failed, you call its callback, but still leave it
> queued. I think this is wrong.
Right, we should not insert the failed request back into the block queue.
>
>>>>>> +            }
>>>>>> +        }
>>>>>> +    }
>>>>>> +
>>>>>> +    queue->req_failed = true;
>>>>>> +    queue->flushing   = false;
>>>>>> +}
>>>>>> +
>>>>>> +bool qemu_block_queue_has_pending(BlockQueue *queue)
>>>>>> +{
>>>>>> +    return !queue->flushing && !QTAILQ_EMPTY(&queue->requests);
>>>>>> +}
>>>>>
>>>>> Why doesn't the queue have pending requests in the middle of a flush
>>>>> operation? (That is, the flush hasn't completed yet)
>>>> It is possible for the queue to have pending requests. if yes, how about?
>>>
>>> Sorry, can't parse this.
>>>
>>> I don't understand why the !queue->flushing part is correct.
>
> What about this?
When bdrv_aio_readv/writev handles a request, it checks whether the
block queue exists and is not currently being flushed; if so, it
assumes that this request is a new request from the upper layer, so it
does not check whether the runtime I/O rate has exceeded the limits,
but immediately inserts it into the block queue.
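That dispatch decision, and why has_pending() excludes the flushing case, can be sketched like this (illustrative names, not the real QEMU API): during a flush the requests reaching the wrapper come from the flush loop itself, so they must not be treated as new arrivals and re-queued behind themselves.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the dispatch decision in the throttled read/write path. */
struct queue {
    int  pending;    /* queued request count */
    bool flushing;
};

static bool has_pending(const struct queue *q)
{
    /* During a flush, requests are being re-submitted by the flush loop
     * itself, so they must not count as "new" pending arrivals. */
    return !q->flushing && q->pending > 0;
}

enum action { QUEUE_DIRECTLY, CHECK_LIMITS };

static enum action dispatch(const struct queue *q)
{
    /* A new request arriving behind already-queued ones is queued
     * immediately, without a throttle check, to preserve ordering. */
    return has_pending(q) ? QUEUE_DIRECTLY : CHECK_LIMITS;
}
```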

>
> Kevin
>



-- 
Regards,

Zhi Yong Wu


