Re: [PATCH 1/2] copy-before-write: allow specifying minimum cluster size
From: Markus Armbruster
Subject: Re: [PATCH 1/2] copy-before-write: allow specifying minimum cluster size
Date: Tue, 14 May 2024 14:09:34 +0200
User-agent: Gnus/5.13 (Gnus v5.13)
Fiona Ebner <f.ebner@proxmox.com> writes:
> Am 26.03.24 um 10:06 schrieb Markus Armbruster:
>>> @@ -365,7 +368,13 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
>>>
>>>      GLOBAL_STATE_CODE();
>>>
>>> -    cluster_size = block_copy_calculate_cluster_size(target->bs, errp);
>>> +    if (min_cluster_size && !is_power_of_2(min_cluster_size)) {
>>
>> min_cluster_size is int64_t, is_power_of_2() takes uint64_t. Bad if
>> min_cluster_size is negative. Could this happen?
>
> No, because it comes in as a uint32_t via the QAPI (the internal caller
> added by patch 2/2 from the backup code also gets the value via QAPI and
> there uint32_t is used too).
Good.
Is there a practical way to tweak the types so it's more obvious?
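To illustrate why the uint64_t parameter matters, here's a standalone
sketch (not the patch; the local is_power_of_2() merely mirrors QEMU's):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Assumed local stand-in mirroring QEMU's is_power_of_2(); note
     * the uint64_t parameter. */
    static bool is_power_of_2(uint64_t value)
    {
        return value && !(value & (value - 1));
    }

    int main(void)
    {
        int64_t min_cluster_size = INT64_MIN;

        /* INT64_MIN converts to 2^63, which *is* a power of 2, so the
         * check silently accepts a nonsensical negative input. */
        printf("%d\n", is_power_of_2(min_cluster_size));   /* prints 1 */
        return 0;
    }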
>>> +        error_setg(errp, "min-cluster-size needs to be a power of 2");
>>> +        return NULL;
>>> +    }
>>> +
>>> +    cluster_size = block_copy_calculate_cluster_size(target->bs,
>>> +                                                     min_cluster_size,
>>> +                                                     errp);
>>>      if (cluster_size < 0) {
>>>          return NULL;
>>>      }
>
> ---snip---
>
>>> diff --git a/qapi/block-core.json b/qapi/block-core.json
>>> index 0a72c590a8..85c8f88f6e 100644
>>> --- a/qapi/block-core.json
>>> +++ b/qapi/block-core.json
>>> @@ -4625,12 +4625,18 @@
>>>  #     @on-cbw-error parameter will decide how this failure is handled.
>>>  #     Default 0. (Since 7.1)
>>>  #
>>> +# @min-cluster-size: Minimum size of blocks used by copy-before-write
>>> +#     operations. Has to be a power of 2. No effect if smaller than
>>> +#     the maximum of the target's cluster size and 64 KiB. Default 0.
>>> +#     (Since 9.0)
>>> +#
>>>  # Since: 6.2
>>>  ##
>>>  { 'struct': 'BlockdevOptionsCbw',
>>>    'base': 'BlockdevOptionsGenericFormat',
>>>    'data': { 'target': 'BlockdevRef', '*bitmap': 'BlockDirtyBitmap',
>>> -            '*on-cbw-error': 'OnCbwError', '*cbw-timeout': 'uint32' } }
>>> +            '*on-cbw-error': 'OnCbwError', '*cbw-timeout': 'uint32',
>>> +            '*min-cluster-size': 'uint32' } }
>>
>> Elsewhere in the schema, we use either 'int' or 'size' for cluster-size.
>> Why the difference?
>
> The motivation was to disallow negative values up front and have it work
> with block_copy_calculate_cluster_size(), whose result is an int64_t.
Let's see whether I understand.
cbw_open() passes the new member @min-cluster-size to
block_copy_state_new().
block_copy_state_new() checks it, and passes it on to
block_copy_calculate_cluster_size(). This is the C code shown above.
block_copy_calculate_cluster_size() uses it like

    return MAX(min_cluster_size, BLOCK_COPY_CLUSTER_SIZE_DEFAULT);

and

    return MAX(min_cluster_size,
               MAX(BLOCK_COPY_CLUSTER_SIZE_DEFAULT, bdi.cluster_size));
BLOCK_COPY_CLUSTER_SIZE_DEFAULT and bdi.cluster_size are int. The
return type is int64_t.
Correct?
I don't like mixing signed and unsigned in MAX() like this. Figuring
out whether it's safe takes a C language lawyer. I'd rather avoid such
subtle code. Can we please compute these maxima entirely in the
destination type int64_t?
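For illustration, a standalone sketch of what I have in mind (the macro
and the constant are stand-ins, assumed to match the patch only in
spirit):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MAX(a, b) ((a) > (b) ? (a) : (b))

    /* Stand-in for the block layer's 64 KiB default */
    #define BLOCK_COPY_CLUSTER_SIZE_DEFAULT (1 << 16)

    int main(void)
    {
        uint32_t min_cluster_size = 1 << 20;   /* as received via QAPI */
        int cluster_size = BLOCK_COPY_CLUSTER_SIZE_DEFAULT;

        /* Casting both operands up front makes every conversion
         * trivially value-preserving; no language-lawyering needed. */
        int64_t result = MAX((int64_t)min_cluster_size,
                             (int64_t)cluster_size);
        printf("%" PRId64 "\n", result);       /* prints 1048576 */
        return 0;
    }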
> If
> I go with 'int', I'll have to add a check to disallow negative values.
> If I go with 'size', I'll have to add a check to disallow too large
> values.
>
> Which approach should I go with?
For me, reducing the need for manual checking is important, but
cleanliness of the external interface trumps it. I'd use 'size'.
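With 'size', the value arrives as a uint64_t, so the extra check would
look roughly like this (hypothetical helper, QEMU's error plumbing
omitted):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical validator for a QAPI 'size' (uint64_t) value before
     * it enters the block layer's int64_t world. */
    static bool min_cluster_size_valid(uint64_t min_cluster_size)
    {
        if (min_cluster_size > INT64_MAX) {
            return false;   /* too large for int64_t */
        }
        if (min_cluster_size && (min_cluster_size & (min_cluster_size - 1))) {
            return false;   /* not a power of 2 */
        }
        return true;
    }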
Not the first time I see friction between QAPI 'size' or 'uint64' and
the block layer's use of int64_t.
The block layer likes to use int64_t even when the value must not be
negative. There are good reasons for that.
Perhaps a QAPI type for "non-negative int64_t" could be useful. Weird,
but useful.