
Re: [Qemu-devel] [PATCH 2/5] qcow2: multiple clusters write compressed


From: Anton Nefedov
Subject: Re: [Qemu-devel] [PATCH 2/5] qcow2: multiple clusters write compressed
Date: Thu, 23 Nov 2017 12:04:11 +0300
User-agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:52.0) Gecko/20100101 Thunderbird/52.4.0



On 21/11/2017 8:42 PM, Kevin Wolf wrote:
Am 15.11.2017 um 17:30 hat Max Reitz geschrieben:
On 2017-11-15 17:28, Anton Nefedov wrote:
On 15/11/2017 6:11 PM, Max Reitz wrote:
On 2017-11-14 11:16, Anton Nefedov wrote:
From: Pavel Butsykin <address@hidden>

At the moment, qcow2_co_pwritev_compressed can only process requests whose
size is less than or equal to one cluster. This patch adds the possibility
to write compressed data to a QCOW2 image in requests spanning more than
one cluster. The implementation is simple: we just split large requests
into separate clusters and write them using the existing functionality.
For unaligned requests we use a workaround and write the data without
compression.

Signed-off-by: Anton Nefedov <address@hidden>
---
   block/qcow2.c | 77 +++++++++++++++++++++++++++++++++++++++++++----------------
   1 file changed, 56 insertions(+), 21 deletions(-)
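
(A rough sketch of the per-cluster split the commit message describes,
reconstructed from the hunk quoted further down; the wrapper name and the
setup and cleanup around the loop are approximations, not the exact patch:)

    /* Approximate shape of the per-cluster split; not the exact patch.
     * A large compressed write is chopped into cluster-sized chunks and
     * each chunk is handed to the existing single-cluster path. */
    static coroutine_fn int
    pwritev_compressed_split_sketch(BlockDriverState *bs, uint64_t offset,
                                    uint64_t bytes, QEMUIOVector *qiov)
    {
        BDRVQcow2State *s = bs->opaque;
        QEMUIOVector hd_qiov;
        uint64_t curr_off = 0;
        int ret = 0;

        qemu_iovec_init(&hd_qiov, qiov->niov);

        while (bytes) {
            uint64_t chunk_size = MIN(bytes, s->cluster_size);

            /* Sub-iovec covering just this cluster-sized chunk */
            qemu_iovec_reset(&hd_qiov);
            qemu_iovec_concat(&hd_qiov, qiov, curr_off, chunk_size);

            /* Reuse the existing single-cluster compressed write path */
            ret = qcow2_co_pwritev_cluster_compressed(bs, offset + curr_off,
                                                      chunk_size, &hd_qiov);
            if (ret < 0) {
                break;
            }

            curr_off += chunk_size;
            bytes -= chunk_size;
        }

        qemu_iovec_destroy(&hd_qiov);
        return ret;
    }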

On one hand, it might be better to do this centrally somewhere in
block/io.c.  On the other, that would require more work because it would
probably mean having to introduce another field in BlockLimits, and it
wouldn't do much -- because qcow (v1) is, well, qcow v1...  And vmdk
seems to completely not care about this single cluster limitation.  So
for now we probably wouldn't even gain anything by doing this in
block/io.c.

So long story short, it's OK to do this locally in qcow2, yes.


[..]

+        qemu_iovec_reset(&hd_qiov);
+        chunk_size = MIN(bytes, s->cluster_size);
+        qemu_iovec_concat(&hd_qiov, qiov, curr_off, chunk_size);
+
+        ret = qcow2_co_pwritev_cluster_compressed(bs, offset + curr_off,
+                                                  chunk_size, &hd_qiov);
+        if (ret == -ENOTSUP) {

Why this?  I mean, I can see the appeal, but then we should probably
actually return -ENOTSUP somewhere (e.g. for unaligned clusters and the
like) -- and we should not abort if offset_into_cluster(s, cluster) is
true, but we should write the header uncompressed and compress the main
bulk.

Max
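
(One possible reading of the "write the header uncompressed and compress
the main bulk" suggestion above, as an illustrative fragment; none of this
is in the patch, and using the regular qcow2_co_pwritev path for the head
is an assumption:)

    /* Illustrative only, not from the patch: write the unaligned head of
     * the request through the regular (uncompressed) path, then continue
     * with the now cluster-aligned remainder on the compressed path. */
    uint64_t head = offset_into_cluster(s, offset);

    if (head) {
        uint64_t head_bytes = MIN(bytes, s->cluster_size - head);
        QEMUIOVector head_qiov;

        qemu_iovec_init(&head_qiov, qiov->niov);
        qemu_iovec_concat(&head_qiov, qiov, 0, head_bytes);

        ret = qcow2_co_pwritev(bs, offset, head_bytes, &head_qiov, 0);
        qemu_iovec_destroy(&head_qiov);
        if (ret < 0) {
            return ret;
        }

        offset += head_bytes;
        bytes  -= head_bytes;
        /* ... the remaining, cluster-aligned part gets compressed ... */
    }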


Right, sorry, missed this part when porting the patch.

I think this needs to be removed (and the commit message fixed
accordingly).
Returning an error, rather than silently falling back to uncompressed,
seems preferable to me. "Compressing writers" (backup, img convert and
now stream) are aware that they have to cluster-align, and if they fail
to do so, that means there is an error somewhere.
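
(A minimal sketch of that stricter behaviour; the actual check and error
code in v2 may differ, so treat the placement and the end-of-image
handling as assumptions:)

    /* Sketch only: reject misaligned compressed writes instead of
     * silently writing them uncompressed.  A real check would still
     * have to allow an unaligned tail at the end of the image. */
    if (offset_into_cluster(s, offset) || offset_into_cluster(s, bytes)) {
        /* Compressing callers (backup, qemu-img convert, stream) are
         * expected to send cluster-aligned requests; anything else
         * indicates a bug on their side. */
        return -EINVAL;
    }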

OK for me.

If it won't return an error anymore, then the unaligned tail shouldn't
either. So we can end up 'letting' the callers send small unaligned
requests which will never get compressed.

Either way is fine.  It just looks to me like vmdk falls back to
uncompressed writes, so it's not like it would be completely new behavior...

(But I won't judge whether what vmdk does is a good idea.)

Probably not.

If we let io.c know about the cluster-size alignment requirement for
compressed writes, the usual RMW code path could kick in. Wouldn't this
actually be a better solution than uncompressed writes or erroring out?

In fact, with this, we might even be very close to an option that turns
every write into a compressed write, so you get images that stay
compressed even while they are in use.

Kevin
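
(Purely hypothetical illustration of that idea; the BlockLimits field
below does not exist, but a driver could advertise the granularity much
like qcow2_refresh_limits() already advertises pwrite_zeroes_alignment:)

    /* Hypothetical fragment in qcow2_refresh_limits(): advertise the
     * compressed-write granularity so that io.c could apply its usual
     * read-modify-write padding, the way it already pads requests to
     * bs->bl.request_alignment.  The field name is an assumption and
     * does not exist in QEMU. */
    bs->bl.pwritev_compressed_alignment = s->cluster_size;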


That's alignment, and indeed it would be nice to have that block limit,
and I think this patch does not contradict that (now that in v2 it no
longer falls back to uncompressed but returns EINVAL).
Unless we also want a max request limit so that io.c does the request
splitting?
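
(A sketch of what such an io.c-side split could look like; the max limit
field and its handling are assumptions, modelled on how io.c already
clamps ordinary requests to bs->bl.max_transfer:)

    /* Hypothetical sketch of letting io.c do the split; the BlockLimits
     * field below does not exist, and real code would also have to deal
     * with request tracking and alignment. */
    uint64_t max = bs->bl.max_pwritev_compressed;  /* assumed new field */
    uint64_t done = 0;

    while (done < bytes) {
        uint64_t num = MIN(bytes - done, max);
        QEMUIOVector local_qiov;

        qemu_iovec_init(&local_qiov, qiov->niov);
        qemu_iovec_concat(&local_qiov, qiov, done, num);

        ret = bs->drv->bdrv_co_pwritev_compressed(bs, offset + done, num,
                                                  &local_qiov);
        qemu_iovec_destroy(&local_qiov);
        if (ret < 0) {
            return ret;
        }
        done += num;
    }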

/Anton


