qemu-devel
From: Chentao (Boby)
Subject: [Qemu-devel] [PATCH] drive-mirror: Change the amount of data base on granularity
Date: Sat, 18 Jan 2014 08:09:43 +0000

Previously, one iteration sent a run of contiguous dirty blocks, up to the mirror buffer size (default 10M).

This gives poor read/write performance. If the image type is raw, all of the data is dirty on the first pass,

so each iteration reads 10M of data and then writes 10M of data to the target image; the reads and writes cannot be parallelized.

 

Now I change the amount of data sent per iteration to be based on the granularity. If we set the granularity to 1M, the job can

issue 10 read requests and then the write requests. Once a write request completes, 1M of buffer is freed for the next read request.

This way reads and writes can be parallelized.

 

This change improves read and write performance.

On my server:

(write) throughput: 55 MB/s --> 90 MB/s, utilization: 50% --> 85%

 

Signed-off-by: Zhang Min <address@hidden>

---
 block/mirror.c |   68 ++++++++++++++++++++++---------------------------------
 1 files changed, 27 insertions(+), 41 deletions(-)

diff --git a/block/mirror.c b/block/mirror.c
index 2932bab..1ba2862 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -183,54 +183,40 @@ static void coroutine_fn mirror_iteration(MirrorBlockJob *s)
         qemu_coroutine_yield();
     }

-    do {
-        int added_sectors, added_chunks;
+    int added_sectors, added_chunks;

-        if (!bdrv_get_dirty(source, s->dirty_bitmap, next_sector) ||
-            test_bit(next_chunk, s->in_flight_bitmap)) {
-            assert(nb_sectors > 0);
-            break;
-        }
+    added_sectors = sectors_per_chunk;
+    if (s->cow_bitmap && !test_bit(next_chunk, s->cow_bitmap)) {
+        bdrv_round_to_clusters(s->target,
+                next_sector, added_sectors,
+                &next_sector, &added_sectors);

-        added_sectors = sectors_per_chunk;
-        if (s->cow_bitmap && !test_bit(next_chunk, s->cow_bitmap)) {
-            bdrv_round_to_clusters(s->target,
-                                   next_sector, added_sectors,
-                                   &next_sector, &added_sectors);
-
-            /* On the first iteration, the rounding may make us copy
-             * sectors before the first dirty one.
-             */
-            if (next_sector < sector_num) {
-                assert(nb_sectors == 0);
-                sector_num = next_sector;
-                next_chunk = next_sector / sectors_per_chunk;
-            }
+        /* On the first iteration, the rounding may make us copy
+         * sectors before the first dirty one.
+         */
+        if (next_sector < sector_num) {
+            assert(nb_sectors == 0);
+            sector_num = next_sector;
+            next_chunk = next_sector / sectors_per_chunk;
+        }
         }
+    }

-        added_sectors = MIN(added_sectors, end - (sector_num + nb_sectors));
-        added_chunks = (added_sectors + sectors_per_chunk - 1) / sectors_per_chunk;
+    added_sectors = MIN(added_sectors, end - (sector_num + nb_sectors));
+    added_chunks = (added_sectors + sectors_per_chunk - 1) / sectors_per_chunk;

-        /* When doing COW, it may happen that there is not enough space for
-         * a full cluster.  Wait if that is the case.
-         */
-        while (nb_chunks == 0 && s->buf_free_count < added_chunks) {
-            trace_mirror_yield_buf_busy(s, nb_chunks, s->in_flight);
-            qemu_coroutine_yield();
-        }
-        if (s->buf_free_count < nb_chunks + added_chunks) {
-            trace_mirror_break_buf_busy(s, nb_chunks, s->in_flight);
-            break;
-        }
+    /* When doing COW, it may happen that there is not enough space for
+     * a full cluster.  Wait if that is the case.
+     */
+    while (nb_chunks == 0 && s->buf_free_count < added_chunks) {
+        trace_mirror_yield_buf_busy(s, nb_chunks, s->in_flight);
+        qemu_coroutine_yield();
+    }

-        /* We have enough free space to copy these sectors.  */
-        bitmap_set(s->in_flight_bitmap, next_chunk, added_chunks);
+    /* We have enough free space to copy these sectors.  */
+    bitmap_set(s->in_flight_bitmap, next_chunk, added_chunks);

-        nb_sectors += added_sectors;
-        nb_chunks += added_chunks;
-        next_sector += added_sectors;
-        next_chunk += added_chunks;
-    } while (next_sector < end);
+    nb_sectors += added_sectors;
+    nb_chunks += added_chunks;

     /* Allocate a MirrorOp that is used as an AIO callback.  */
     op = g_slice_new(MirrorOp);

--

 

Zhang Min

 

