From: Eric Blake
Subject: Re: [Qemu-block] [PATCH v2 5/7] nbd-client: Short-circuit 0-length operations
Date: Thu, 9 Nov 2017 08:44:15 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0

On 11/09/2017 03:20 AM, Vladimir Sementsov-Ogievskiy wrote:
> 09.11.2017 00:57, Eric Blake wrote:
>> The NBD spec was recently clarified to state that clients should
>> not send 0-length requests to the server, as the server behavior
>> is undefined [1].  We know that qemu-nbd's behavior is a successful
>> no-op (once it has filtered for read-only exports), but other NBD
>> implementations might return an error.  To avoid any questionable
>> server implementations, it is better to just short-circuit such
>> requests on the client side (we are relying on the block layer to
>> already filter out requests such as invalid offset, write to a
>> read-only volume, and so forth).
>>
>> [1] https://github.com/NetworkBlockDevice/nbd/commit/ee926037
>>
>> Signed-off-by: Eric Blake <address@hidden>
> 
> Reviewed-by: Vladimir Sementsov-Ogievskiy <address@hidden>
> 

>> @@ -705,6 +708,9 @@ int nbd_client_co_pwritev(BlockDriverState *bs,
>> uint64_t offset,
>>
>>       assert(bytes <= NBD_MAX_BUFFER_SIZE);
>>
>> +    if (!bytes) {
>> +        return 0;
>> +    }
> 
> we don't do this check before the flags manipulation, so that we don't
> miss possible asserts...
> 
>>       return nbd_co_request(bs, &request, qiov);

Correct - I put the short-circuit as late as possible to ensure that
preconditions are still being met.  I can tweak the commit message to
make that more obvious, if desired.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org
