From: Peter Lieven
Subject: [Qemu-devel] [RFC PATCH] block: optimize zero writes with bdrv_write_zeroes
Date: Sat, 22 Feb 2014 14:00:22 +0100

This patch tries to optimize zero write requests by automatically
using bdrv_write_zeroes if it is supported by the format.

I know there is a lot of potential for discussion here, but I would
like to hear what others think.

This should significantly speed up file system initialization and
zero-write tests used to benchmark backend storage performance.

The difference can easily be seen with, e.g.:

dd if=/dev/zero of=/dev/vdX bs=1M

Signed-off-by: Peter Lieven <address@hidden>
---
 block.c               |    8 ++++++++
 include/qemu-common.h |    1 +
 util/iov.c            |   20 ++++++++++++++++++++
 3 files changed, 29 insertions(+)

diff --git a/block.c b/block.c
index 6f4baca..505888e 100644
--- a/block.c
+++ b/block.c
@@ -3145,6 +3145,14 @@ static int coroutine_fn bdrv_aligned_pwritev(BlockDriverState *bs,
 
     ret = notifier_with_return_list_notify(&bs->before_write_notifiers, req);
 
+    if (!ret && !(flags & BDRV_REQ_ZERO_WRITE) &&
+        drv->bdrv_co_write_zeroes && qemu_iovec_is_zero(qiov)) {
+        flags |= BDRV_REQ_ZERO_WRITE;
+        /* If the device was not opened with discard=on, the flag below
+         * is cleared again immediately in bdrv_co_do_write_zeroes() */
+        flags |= BDRV_REQ_MAY_UNMAP;
+    }
+
     if (ret < 0) {
         /* Do nothing, write notifier decided to fail this request */
     } else if (flags & BDRV_REQ_ZERO_WRITE) {
diff --git a/include/qemu-common.h b/include/qemu-common.h
index b0e34b2..f0ad0f9 100644
--- a/include/qemu-common.h
+++ b/include/qemu-common.h
@@ -330,6 +330,7 @@ void qemu_iovec_concat(QEMUIOVector *dst,
 void qemu_iovec_concat_iov(QEMUIOVector *dst,
                            struct iovec *src_iov, unsigned int src_cnt,
                            size_t soffset, size_t sbytes);
+bool qemu_iovec_is_zero(QEMUIOVector *qiov);
 void qemu_iovec_destroy(QEMUIOVector *qiov);
 void qemu_iovec_reset(QEMUIOVector *qiov);
 size_t qemu_iovec_to_buf(QEMUIOVector *qiov, size_t offset,
diff --git a/util/iov.c b/util/iov.c
index bb46c04..abbb374 100644
--- a/util/iov.c
+++ b/util/iov.c
@@ -342,6 +342,26 @@ void qemu_iovec_concat(QEMUIOVector *dst,
     qemu_iovec_concat_iov(dst, src->iov, src->niov, soffset, sbytes);
 }
 
+/*
+ * check if the contents of all iovecs are zero
+ */
+bool qemu_iovec_is_zero(QEMUIOVector *qiov) {
+    int i;
+    for (i = 0; i < qiov->niov; i++) {
+        size_t offs = qiov->iov[i].iov_len & ~(4 * sizeof(long) - 1);
+        uint8_t *ptr = qiov->iov[i].iov_base;
+        if (offs && !buffer_is_zero(qiov->iov[i].iov_base, offs)) {
+            return false;
+        }
+        for (; offs < qiov->iov[i].iov_len; offs++) {
+             if (ptr[offs]) {
+                 return false;
+             }
+        }
+    }
+    return true;
+}
+
 void qemu_iovec_destroy(QEMUIOVector *qiov)
 {
     assert(qiov->nalloc != -1);
-- 
1.7.9.5



