From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH v2 2/7] block/block-copy: use block_status
Date: Sat, 8 Feb 2020 15:25:56 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.2.1

07.02.2020 20:46, Max Reitz wrote:
> On 27.11.19 19:08, Vladimir Sementsov-Ogievskiy wrote:
>> Use bdrv_block_status_above to choose an effective chunk size and to
>> handle zeroes efficiently.
>>
>> This replaces checking merely whether a region is allocated or not,
>> and drops the old code path for it. Assistance by the backup job is
>> dropped too, as caching block-status information is more difficult
>> than just caching is-allocated information in our dirty bitmap, and
>> the backup job is not a good place for this caching anyway.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
>> ---
>>  block/block-copy.c | 67 +++++++++++++++++++++++++++++++++++++---------
>>  block/trace-events |  1 +
>>  2 files changed, 55 insertions(+), 13 deletions(-)

>> diff --git a/block/block-copy.c b/block/block-copy.c
>> index 8602e2cae7..74295d93d5 100644
>> --- a/block/block-copy.c
>> +++ b/block/block-copy.c
>>
>> [...]
>>
>> @@ -336,23 +375,25 @@ int coroutine_fn block_copy(BlockCopyState *s,
>>              chunk_end = next_zero;
>>          }
>>
>> -        if (s->skip_unallocated) {
>> -            ret = block_copy_reset_unallocated(s, start, &status_bytes);
>> -            if (ret == 0) {
>> -                trace_block_copy_skip_range(s, start, status_bytes);
>> -                start += status_bytes;
>> -                continue;
>> -            }
>> -            /* Clamp to known allocated region */
>> -            chunk_end = MIN(chunk_end, start + status_bytes);
>> +        ret = block_copy_block_status(s, start, chunk_end - start,
>> +                                      &status_bytes);
>> +        if (s->skip_unallocated && !(ret & BDRV_BLOCK_ALLOCATED)) {
>> +            bdrv_reset_dirty_bitmap(s->copy_bitmap, start, status_bytes);
>> +            s->progress_reset_callback(s->progress_opaque);
>> +            trace_block_copy_skip_range(s, start, status_bytes);
>> +            start += status_bytes;
>> +            continue;
>>          }
>>
>> +        chunk_end = MIN(chunk_end, start + status_bytes);

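(The hunk above calls block_copy_block_status(), which the patch
introduces in a part not quoted here. Below is a minimal sketch of such
a helper, assuming it simply wraps bdrv_block_status_above(); the body,
and any field name not visible in the hunk above, are assumptions
rather than the patch's actual code.)

static int block_copy_block_status(BlockCopyState *s, int64_t offset,
                                   int64_t bytes, int64_t *pnum)
{
    int64_t num;
    int ret;

    /*
     * Assumed field name s->source: the copy operation's source node.
     * bdrv_block_status_above() reports the status of the region at
     * @offset and stores in @num how many bytes share that status.
     */
    ret = bdrv_block_status_above(s->source->bs, NULL, offset, bytes,
                                  &num, NULL, NULL);
    if (ret < 0) {
        /*
         * On error the caller copies the whole chunk anyway (a negative
         * ret still has BDRV_BLOCK_ALLOCATED set, so the chunk is not
         * skipped), so report the full requested length here.
         */
        num = bytes;
    }

    *pnum = num;
    return ret;
}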
> I’m not sure how much the old “Clamp to known allocated region”
> actually helps, but I wouldn’t drop it anyway.

But it may not be allocated now: we still clamp, just to status_bytes.
It's "known allocated" only if s->skip_unallocated is true.


> Apart from this nit, the patch looks good to me.
>
> Max



--
Best regards,
Vladimir


