+echo
+echo '=== Partial final cluster ==='
+echo
+
+_make_test_img 1024
+$QEMU_IO -f $IMGFMT -C -c 'read 0 1024' "$TEST_IMG" | _filter_qemu_io
Here, $TEST_IMG has no backing file and does not have the final
cluster allocated; all we have to do is properly read all zeroes and
write them back, and we then test that the cluster is allocated
afterwards.
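The case above can be sketched with a toy model (hypothetical, not QEMU code; cluster size and class names are made up for illustration): with no backing file, reading the unallocated partial final cluster returns all zeroes, and the allocating copy-on-read writes those zeroes back so the cluster is allocated afterwards.

```python
CLUSTER = 65536  # hypothetical cluster size, larger than the 1024-byte image

class Image:
    """Toy model of a top image with no backing file whose single
    (partial) final cluster starts out unallocated."""
    def __init__(self, size):
        self.size = size
        self.allocated = set()  # indices of allocated clusters

    def read_cor(self, offset, length):
        idx = offset // CLUSTER
        if idx not in self.allocated:
            # No backing file: the data reads as all zeroes, and the
            # allocating copy-on-read writes those zeroes back into
            # the top image, allocating the partial cluster.
            self.allocated.add(idx)
        return bytes(length)

img = Image(1024)
buf = img.read_cor(0, 1024)
assert buf == bytes(1024)   # reads as all zeroes
assert 0 in img.allocated   # cluster is allocated afterwards
```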
> Is it worth also explicitly testing reading of allocated data under
> COR
I could add a second read of the same area, which would not only test
reading of allocated data, but also test whether the allocating COR
wrote the correct data (all zeroes).
Do you think that would be a worthwhile addition?
> and/or the case of one image with another backing image (with the
> same or differing partial cluster sizes), where COR actually has to
> write the partial cluster? However, the logic in the code appears to
> cover all of those, whether or not the testsuite does as well.
Having a backing file is a different code path for non-zero data, but I
think we already test normal write/write_zeroes extensively. For zero
data, it doesn't make a difference whether it comes from the backing
file or from not having a backing file (as far as the COR logic is
concerned, anyway).
I didn't want to use a backing file because the test is 'generic', but
it already uses qcow2 in other cases, so if we really think this is
important to have, I guess I can add another case with qcow2 over
$IMGFMT and non-zero data.