Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage


From: Hailiang Zhang
Subject: Re: [Qemu-devel] [PATCH] migration: not send zero page header in ram bulk stage
Date: Fri, 15 Jan 2016 18:17:25 +0800
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.4.0

On 2016/1/15 17:48, Liang Li wrote:
Since the VM's RAM pages are initialized to zero (the VM's RAM is allocated
with mmap() and the MAP_ANONYMOUS option, or with mmap() without MAP_SHARED
if hugetlbfs is used), there is no need to send the zero page header to the
destination.


It seems that this patch is incorrect: if non-zero pages are zeroed again
during !ram_bulk_stage and we do not send the newly zeroed pages, there
will be an error.

For a guest that uses only a small portion of its RAM, this change can
avoid allocating all of the guest's RAM pages on the destination node
after live migration. Another benefit is that the destination QEMU can
save lots of CPU cycles on zero page checking.
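
For reference, the destination-side handling of a zero-page header looks
roughly like the following (paraphrased from ram_handle_compressed() in
migration/ram.c of this era). The is_zero_range() scan is the CPU cost
that disappears when no header arrives, and the memset() is what would
fault in (allocate) an otherwise untouched page:

void ram_handle_compressed(void *host, uint8_t ch, uint64_t size)
{
    /* Write the fill byte only if the page is not already zero, so an
     * all-zero page on a fresh destination is never touched. */
    if (ch != 0 || !is_zero_range(host, size)) {
        memset(host, ch, size);
    }
}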

Signed-off-by: Liang Li <address@hidden>
---
  migration/ram.c | 10 ++++++----
  1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 4e606ab..c4821d1 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -705,10 +705,12 @@ static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,

      if (is_zero_range(p, TARGET_PAGE_SIZE)) {
          acct_info.dup_pages++;
-        *bytes_transferred += save_page_header(f, block,
-                                               offset | RAM_SAVE_FLAG_COMPRESS);
-        qemu_put_byte(f, 0);
-        *bytes_transferred += 1;
+        if (!ram_bulk_stage) {
+            *bytes_transferred += save_page_header(f, block, offset |
+                                                   RAM_SAVE_FLAG_COMPRESS);
+            qemu_put_byte(f, 0);
+            *bytes_transferred += 1;
+        }
          pages = 1;
      }

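Reconstructed for readability, the patched function reads roughly as
follows (a sketch assembled from the hunk above; the declaration of
pages and the return statement are assumed from the surrounding ram.c
and are not part of the diff):

static int save_zero_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
                          uint8_t *p, uint64_t *bytes_transferred)
{
    int pages = -1;

    if (is_zero_range(p, TARGET_PAGE_SIZE)) {
        acct_info.dup_pages++;
        /* In the bulk stage the destination RAM is still in its
         * zero-initialized state, so the zero-page header can be
         * elided entirely. */
        if (!ram_bulk_stage) {
            *bytes_transferred += save_page_header(f, block, offset |
                                                   RAM_SAVE_FLAG_COMPRESS);
            qemu_put_byte(f, 0);
            *bytes_transferred += 1;
        }
        pages = 1;
    }

    return pages;
}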