[PULL 13/19] block/block-copy: specialcase first copy_range request
From: Max Reitz
Subject: [PULL 13/19] block/block-copy: specialcase first copy_range request
Date: Wed, 11 Mar 2020 14:52:07 +0100
From: Vladimir Sementsov-Ogievskiy <address@hidden>
In block_copy_do_copy we fall back to read+write if copy_range fails. In
that case copy_size is larger than the limit defined for buffered I/O,
and there is a corresponding comment in the code. Still, backup copies
data cluster by cluster, and most requests are limited to one cluster
anyway, so the only source of such a badly-limited request is the
copy-before-write operation.
A further patch will move backup to use block_copy directly; then, in
cases where copy_range is not supported, the first request of each
backup would be oversized. That's not good, so let's change it now.
The fix is simple: just limit the first copy_range request like a
buffer-based request. If it succeeds, set the larger copy_range limit.
Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
Reviewed-by: Andrey Shinkevich <address@hidden>
Reviewed-by: Max Reitz <address@hidden>
Message-Id: <address@hidden>
Signed-off-by: Max Reitz <address@hidden>
---
block/block-copy.c | 41 +++++++++++++++++++++++++++++++----------
1 file changed, 31 insertions(+), 10 deletions(-)
diff --git a/block/block-copy.c b/block/block-copy.c
index e2d7b3b887..ddd61c1652 100644
--- a/block/block-copy.c
+++ b/block/block-copy.c
@@ -70,16 +70,19 @@ void block_copy_state_free(BlockCopyState *s)
g_free(s);
}
+static uint32_t block_copy_max_transfer(BdrvChild *source, BdrvChild *target)
+{
+ return MIN_NON_ZERO(INT_MAX,
+ MIN_NON_ZERO(source->bs->bl.max_transfer,
+ target->bs->bl.max_transfer));
+}
+
BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
int64_t cluster_size,
BdrvRequestFlags write_flags, Error **errp)
{
BlockCopyState *s;
BdrvDirtyBitmap *copy_bitmap;
- uint32_t max_transfer =
- MIN_NON_ZERO(INT_MAX,
- MIN_NON_ZERO(source->bs->bl.max_transfer,
- target->bs->bl.max_transfer));
copy_bitmap = bdrv_create_dirty_bitmap(source->bs, cluster_size, NULL,
errp);
@@ -99,7 +102,7 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
.mem = shres_create(BLOCK_COPY_MAX_MEM),
};
- if (max_transfer < cluster_size) {
+ if (block_copy_max_transfer(source, target) < cluster_size) {
/*
* copy_range does not respect max_transfer. We don't want to bother
* with requests smaller than block-copy cluster size, so fallback to
@@ -114,12 +117,11 @@ BlockCopyState *block_copy_state_new(BdrvChild *source, BdrvChild *target,
s->copy_size = cluster_size;
} else {
/*
- * copy_range does not respect max_transfer (it's a TODO), so we factor
- * that in here.
+ * We enable copy-range, but keep small copy_size, until first
+ * successful copy_range (look at block_copy_do_copy).
*/
s->use_copy_range = true;
- s->copy_size = MIN(MAX(cluster_size, BLOCK_COPY_MAX_COPY_RANGE),
- QEMU_ALIGN_DOWN(max_transfer, cluster_size));
+ s->copy_size = MAX(s->cluster_size, BLOCK_COPY_MAX_BUFFER);
}
QLIST_INIT(&s->inflight_reqs);
@@ -172,6 +174,22 @@ static int coroutine_fn block_copy_do_copy(BlockCopyState *s,
s->copy_size = MAX(s->cluster_size, BLOCK_COPY_MAX_BUFFER);
/* Fallback to read+write with allocated buffer */
} else {
+ if (s->use_copy_range) {
+ /*
+ * Successful copy-range. Now increase copy_size. copy_range
+ * does not respect max_transfer (it's a TODO), so we factor
+ * that in here.
+ *
+ * Note: we double-check s->use_copy_range for the case when
+ * parallel block-copy request unsets it during previous
+ * bdrv_co_copy_range call.
+ */
+ s->copy_size =
+ MIN(MAX(s->cluster_size, BLOCK_COPY_MAX_COPY_RANGE),
+ QEMU_ALIGN_DOWN(block_copy_max_transfer(s->source,
+ s->target),
+ s->cluster_size));
+ }
goto out;
}
}
@@ -179,7 +197,10 @@ static int coroutine_fn block_copy_do_copy(BlockCopyState *s,
/*
* In case of failed copy_range request above, we may proceed with buffered
* request larger than BLOCK_COPY_MAX_BUFFER. Still, further requests will
- * be properly limited, so don't care too much.
+ * be properly limited, so don't care too much. Moreover the most likely
+ * case (copy_range is unsupported for the configuration, so the very first
+ * copy_range request fails) is handled by setting large copy_size only
+ * after first successful copy_range.
*/
bounce_buffer = qemu_blockalign(s->source->bs, nbytes);
--
2.24.1