Re: [Qemu-block] [PATCH for-2.12 v5] iotests: Test abnormally large size
From: Max Reitz
Subject: Re: [Qemu-block] [PATCH for-2.12 v5] iotests: Test abnormally large size in compressed cluster descriptor
Date: Thu, 29 Mar 2018 18:04:42 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.6.0
On 2018-03-29 14:07, Alberto Garcia wrote:
> L2 entries for compressed clusters have a field that indicates the
> number of sectors used to store the data in the image.
>
> That's however not the size of the compressed data itself, just the
> number of sectors where that data is located. The actual data size is
> usually not a multiple of the sector size, and therefore cannot be
> represented with this field.
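(For reference, decoding such an entry can be sketched like this. This follows the compressed-cluster descriptor layout in the qcow2 spec, where x = 62 - (cluster_bits - 8): bits 0..x-1 hold the host offset and bits x..61 hold the number of additional 512-byte sectors. This is a sketch with names of my own, not QEMU code.)

```python
def parse_compressed_descriptor(l2_entry, cluster_bits):
    """Decode a qcow2 compressed-cluster L2 entry (illustrative sketch)."""
    x = 62 - (cluster_bits - 8)
    # Low bits: byte offset of the compressed data in the image file.
    host_offset = l2_entry & ((1 << x) - 1)
    # Bits x..61: number of *additional* 512-byte sectors, hence the +1.
    nb_csectors = ((l2_entry >> x) & ((1 << (62 - x)) - 1)) + 1
    # Bytes QEMU actually reads: whole sectors minus the sub-sector offset.
    csize = nb_csectors * 512 - (host_offset & 511)
    return host_offset, nb_csectors, csize

# Example: 64k clusters (cluster_bits = 16, so x = 54), compressed flag in
# bit 62, data at byte offset 0x405, 5 additional sectors:
entry = (1 << 62) | (5 << 54) | 0x405
print(parse_compressed_descriptor(entry, 16))  # (1029, 6, 3067)
```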
>
> The way it works is that QEMU reads all the specified sectors and
> starts decompressing the data until there's enough to recover the
> original uncompressed cluster. If there are any bytes left that
> haven't been decompressed they are simply ignored.
>
> One consequence of this is that even if the size field is larger than
> it needs to be, QEMU can handle it just fine: it will read more data
> from disk but ignore the extra bytes.
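(That behaviour is easy to reproduce with zlib itself. A sketch, assuming the raw-deflate stream qcow2 stores, i.e. negative window bits; trailing junk after the end of the stream is simply left unconsumed:)

```python
import zlib

cluster = b'\x11' * 65536                     # one 64k cluster of data
comp = zlib.compressobj(9, zlib.DEFLATED, -12)  # raw deflate, as in qcow2
data = comp.compress(cluster) + comp.flush()

# Pad with junk, as an oversized sector count would make QEMU read:
padded = data + b'\xff' * 1024

d = zlib.decompressobj(-12)
out = d.decompress(padded, len(cluster))      # stop after one cluster
assert out == cluster                         # full cluster recovered
# The junk past the end of the deflate stream is never looked at.
```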
>
> This test creates an image with two compressed clusters that use 5
> sectors (2.5 KB) each, increases the size field to the maximum (8192
> sectors, or 4 MB) and verifies that the data can be read without
> problems.
>
> This test is important because while the decompressed data takes
> exactly one cluster, the maximum value allowed in the compressed size
> field is twice the cluster size. So although QEMU won't produce images
> with such large values we need to make sure that it can handle them.
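(The arithmetic behind that claim, for the 2 MB clusters the test uses; the numbers are mine, derived from the descriptor layout in the spec:)

```python
cluster_bits = 21                    # 2 MB clusters
x = 62 - (cluster_bits - 8)          # offset field width: 49 bits
width = 62 - x                       # sector-count field width: 13 bits
max_sectors = ((1 << width) - 1) + 1 # 8191 additional sectors, plus one

assert max_sectors == 8192
assert max_sectors * 512 == 4 * 1024 * 1024            # 4 MB
assert max_sectors * 512 == 2 * (1 << cluster_bits)    # twice the cluster size
```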
>
> Another effect of increasing the size field is that it can make
> it include data from the following host cluster(s). In this case
> 'qemu-img check' will detect that the refcounts are not correct, and
> we'll need to rebuild them.
>
> Additionally, this patch also tests that decreasing the size corrupts
> the image since the original data can no longer be recovered. In this
> case QEMU returns an error when trying to read the compressed data,
> but 'qemu-img check' doesn't see anything wrong if the refcounts are
> consistent.
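(The too-small case can be sketched the same way, again assuming a raw deflate stream: truncating it makes decompression come up short of a full cluster, which is what makes the read fail in QEMU:)

```python
import zlib

cluster = bytes(range(256)) * 256             # 64k of patterned data
comp = zlib.compressobj(9, zlib.DEFLATED, -12)
data = comp.compress(cluster) + comp.flush()

truncated = data[:len(data) // 2]             # fewer sectors than needed

d = zlib.decompressobj(-12)
out = d.decompress(truncated, len(cluster))
# The stream ends early, so the original cluster cannot be recovered:
assert len(out) < len(cluster)
```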
>
> One possible task for the future is to make 'qemu-img check' verify
> the sizes of the compressed clusters, by trying to decompress the data
> and checking that the size stored in the L2 entry is correct.
>
> Signed-off-by: Alberto Garcia <address@hidden>
> ---
> v5: Use 'write -c' instead of 'write' followed by 'convert' [Max]
> Add TODO comment explaining that the size of compressed clusters
> should also be corrected when it's too large in order to avoid
> referencing other unrelated clusters.
>
> v4: Resend for 2.12
>
> v3: Add TODO comment, as suggested by Eric.
>
> Corrupt the length of the second compressed cluster as well so the
> uncompressed data would span three host clusters.
>
> v2: We now have two scenarios where we make QEMU read data from the
> next host cluster and from beyond the end of the image. This
> version also runs qemu-img check on the corrupted image.
>
> If the size field is too small, reading fails but qemu-img check
> succeeds.
>
> If the size field is too large, reading succeeds but qemu-img
> check fails (this can be repaired, though).
> ---
>  tests/qemu-iotests/122     | 47 ++++++++++++++++++++++++++++++++++++++++++++++
>  tests/qemu-iotests/122.out | 33 ++++++++++++++++++++++++++++++++
>  2 files changed, 80 insertions(+)
Now without any convert I have no idea what this test case is doing in
122, but, oh well. Thanks, applied to my block branch:
https://github.com/XanClic/qemu/commits/block
Max