
From: Max Reitz
Subject: Re: [Qemu-devel] [PATCH 7/8] block/qcow2: Speed up zero cluster expansion
Date: Wed, 30 Jul 2014 22:31:18 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.0

On 30.07.2014 18:14, Eric Blake wrote:
On 07/25/2014 12:07 PM, Max Reitz wrote:
Actually, we do not need to allocate a new data cluster for every zero
cluster to be expanded: It is completely sufficient to rely on qcow2's
COW part and instead create a single zero cluster and reuse it as much
as possible.

Signed-off-by: Max Reitz <address@hidden>
---
  block/qcow2-cluster.c | 119 ++++++++++++++++++++++++++++++++++++++------------
  1 file changed, 92 insertions(+), 27 deletions(-)

diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
index 905beb6..867db03 100644
--- a/block/qcow2-cluster.c
+++ b/block/qcow2-cluster.c
@@ -1558,6 +1558,9 @@ static int expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
      BDRVQcowState *s = bs->opaque;
      bool is_active_l1 = (l1_table == s->l1_table);
      uint64_t *l2_table = NULL;
+    int64_t zeroed_cluster_offset = 0;
+    int zeroed_cluster_refcount = 0;
+    int last_zeroed_cluster_l1i = 0, last_zeroed_cluster_l2i = 0;
      int ret;
      int i, j;
@@ -1617,47 +1620,79 @@ static int expand_zero_clusters_in_l1(BlockDriverState *bs, uint64_t *l1_table,
                      continue;
                  }
-                offset = qcow2_alloc_clusters(bs, s->cluster_size);
-                if (offset < 0) {
-                    ret = offset;
-                    goto fail;
+                if (zeroed_cluster_offset) {
+                    zeroed_cluster_refcount += l2_refcount;
+                    if (zeroed_cluster_refcount > 0xffff) {
Doesn't the qcow2 file format allow a variable-sized maximum refcount
(bytes 96-99, refcount_order, in the header)?  Therefore, you should be
using the value computed from the header rather than hard-coding the
assumption that the header uses the default of 16-bit refcounts.

Oh, you're right, I didn't even think of that.

[Yeah,
I know, we don't yet have code that supports non-default size, even
though the file format documents it, but that doesn't mean we should
make it harder to add support down the road...]

And this is probably why, yes. ;-)
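
(Not part of the patch, just to illustrate the point: the limit could be derived from the header's refcount_order instead of being hard-coded. A minimal standalone sketch, with the function name made up for illustration:)

    #include <stdint.h>

    /* Sketch only, not QEMU code: the qcow2 header stores refcount_order in
     * bytes 96-99; refcount_bits = 1 << refcount_order, so the default order
     * of 4 gives 16-bit refcounts and the familiar 0xffff limit. */
    static uint64_t qcow2_max_refcount(unsigned refcount_order)
    {
        unsigned refcount_bits = 1u << refcount_order;

        return refcount_bits >= 64 ? UINT64_MAX
                                   : (UINT64_C(1) << refcount_bits) - 1;
    }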

+                        zeroed_cluster_refcount = 0;
+                        zeroed_cluster_offset = 0;
+                    }
                  }
This isn't a maximal packing.  As long as we don't mind complexity to
gain compactness, couldn't we also expand the existing
zeroed_cluster_offset all the way up to full refcount, and decrement
l2_refcount by the difference, before spilling over to allocating the
next zero cluster?

Hm, right.
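
(Again just an illustration, not the patch's code: the packing Eric describes boils down to the following arithmetic, with hypothetical names:)

    #include <stdint.h>

    /* Sketch only: take as many of the needed references as still fit into
     * the currently shared zero cluster, and return how many spill over into
     * the next freshly allocated zero cluster. */
    static uint64_t fill_current_zero_cluster(uint64_t *cur_refcount,
                                              uint64_t max_refcount,
                                              uint64_t needed)
    {
        uint64_t space = max_refcount - *cur_refcount;
        uint64_t taken = needed < space ? needed : space;

        *cur_refcount += taken;
        return needed - taken; /* remainder goes to a new zero cluster */
    }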

Also, I have to wonder - since the all-zero cluster is the most likely
cluster to have a large refcount, even during normal runtime, should we
special case the normal qcow2 write code to track the current all-zero
cluster (if any), and merely increase its refcount rather than allocate
a new cluster any time it is detected that an all-zero cluster is
needed?  [Of course, the tracking would be runtime only, since
compat=0.10 header doesn't provide any way to track the location of an
all-zero cluster across file reloads.  Each new runtime would probably
settle on a new location for the all-zero cluster used during that run,
rather than trying to find an existing one.  And there's really no point
to adding a header to track an all-zero cluster in compat=1.1 images,
since those images already have the ability to track zero clusters
without needing one allocated.]

This may improve performance for compat=0.10 images; however, I don't think compat=0.10 images matter enough to justify optimizations specifically for them all over the qcow2 code.

+                if (!zeroed_cluster_offset) {
+                    offset = qcow2_alloc_clusters(bs, s->cluster_size);
+                    if (offset < 0) {
+                        ret = offset;
+                        goto fail;
+                    }
-                if (l2_refcount > 1) {
-                    /* For shared L2 tables, set the refcount accordingly (it is
-                     * already 1 and needs to be l2_refcount) */
-                    ret = qcow2_update_cluster_refcount(bs,
-                            offset >> s->cluster_bits, l2_refcount - 1,
-                            QCOW2_DISCARD_OTHER);
+                    ret = qcow2_pre_write_overlap_check(bs, 0, offset,
+                                                        s->cluster_size);
+                    if (ret < 0) {
+                        qcow2_free_clusters(bs, offset, s->cluster_size,
+                                            QCOW2_DISCARD_OTHER);
+                        goto fail;
+                    }
+
+                    ret = bdrv_write_zeroes(bs->file, offset / BDRV_SECTOR_SIZE,
+                                            s->cluster_sectors, 0);
That is, if bdrv_write_zeroes knows how to take advantage of an already
existing all-zero cluster, there would be less special casing in this code,
but it would still get the same benefit of maximizing refcounts during the
amend operation, provided all expanded clusters go through bdrv_write_zeroes.

The special casing would then be somewhere else and I think only this code would really benefit from it. I don't think we should ingrain such optimizations in all of the qcow2 code; I personally can live with having these optimizations contained and separated from the rest of the qcow2 code in functions like this one, but anything else would be a bit too much effort (and maybe even too error-prone, since it would probably be rather complex) for current qemu. We do have compat=1.1 with zero clusters and that is what should be used for performance.
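
(For context: with compat=1.1 a zero cluster is just a flag in the L2 entry and needs no data cluster at all, which is why this expansion only matters when downgrading to compat=0.10. A rough standalone sketch of that flag check, with a locally defined constant standing in for QCOW_OFLAG_ZERO:)

    #include <stdbool.h>
    #include <stdint.h>

    /* Per the qcow2 spec, bit 0 of a standard cluster descriptor marks the
     * cluster as reading back as all zeroes, independent of any allocated
     * host cluster. */
    #define L2_ZERO_FLAG (UINT64_C(1) << 0)

    static bool l2_entry_is_zero_cluster(uint64_t l2_entry)
    {
        return (l2_entry & L2_ZERO_FLAG) != 0;
    }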

I'll hold off on v2 of this patch until you have decided whether it's worth it (in the alternative series, I dropped this patch completely).

Max


