
From: Hailiang Zhang
Subject: Re: [Qemu-devel] [PATCH COLO-Frame v15 29/38] COLO: Separate the process of saving/loading ram and device state
Date: Sat, 27 Feb 2016 18:03:47 +0800
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.5.1

On 2016/2/26 21:16, Dr. David Alan Gilbert wrote:
* zhanghailiang (address@hidden) wrote:
We separate the process of saving/loading ram and device state when doing a
checkpoint, and add new helpers for saving/loading ram/device state. With this
change, we can transfer ram directly from master to slave without using a
QEMUSizedBuffer as an intermediary, which also reduces the amount of extra
memory used during a checkpoint.

Besides, we move colo_flush_ram_cache to the proper position after the
above change.

Signed-off-by: zhanghailiang <address@hidden>
Signed-off-by: Li Zhijian <address@hidden>
---
v14:
- split two new patches from this patch
- Some minor fixes from Dave
v13:
- Re-use some existed helper functions to realize saving/loading
   ram and device.
v11:
- Remove load configuration section in qemu_loadvm_state_begin()
---
  migration/colo.c   | 48 ++++++++++++++++++++++++++++++++++++++----------
  migration/ram.c    |  5 -----
  migration/savevm.c |  5 +++++
  3 files changed, 43 insertions(+), 15 deletions(-)

diff --git a/migration/colo.c b/migration/colo.c
index 16bada6..300fa54 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -288,21 +288,37 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
          goto out;
      }

+    colo_put_cmd(s->to_dst_file, COLO_MESSAGE_VMSTATE_SEND, &local_err);
+    if (local_err) {
+        goto out;
+    }
+
      /* Disable block migration */
      s->params.blk = 0;
      s->params.shared = 0;
-    qemu_savevm_state_header(trans);
-    qemu_savevm_state_begin(trans, &s->params);
+    qemu_savevm_state_begin(s->to_dst_file, &s->params);
+    ret = qemu_file_get_error(s->to_dst_file);
+    if (ret < 0) {
+        error_report("Save vm state begin error");
+        goto out;
+    }
+
      qemu_mutex_lock_iothread();
-    qemu_savevm_state_complete_precopy(trans, false);
+    /*
+     * Only save the VM's live state, which does not include device state.
+     * TODO: We may need a timeout mechanism to prevent the COLO process
+     * from being blocked here.
+     */
+    qemu_savevm_live_state(s->to_dst_file);
+    /* Note: device state is saved into buffer */
+    ret = qemu_save_device_state(trans);
      qemu_mutex_unlock_iothread();

Yes, I still worry a little about what can hang under that lock, but I think

Hmm, we have some other places in COLO taking this lock too. Some of them are
OK to hold it, but holding it while sending/receiving data to/from the other
side in COLO is dangerous. One solution is to apply a timeout to QEMUFile
operations, but that is not a good choice.

it's the best we've got at the moment; we probably need to understand what
the rules are about what actually needs the lock!


Yes, that is another way to solve the problem: don't hold the lock while
sending/receiving data.
For qemu_savevm_live_state() here, IMHO, we can send the remaining RAM data
without holding the global lock. We hold the lock here because we
call cpu_synchronize_all_states() in it. (I'm not sure ...)

Thanks,
Hailiang

Other than that,

Reviewed-by: Dr. David Alan Gilbert <address@hidden>

-
-    qemu_fflush(trans);
-
-    colo_put_cmd(s->to_dst_file, COLO_MESSAGE_VMSTATE_SEND, &local_err);
-    if (local_err) {
+    if (ret < 0) {
+        error_report("Save device state error");
          goto out;
      }
+    qemu_fflush(trans);
+
      /* we send the total size of the vmstate first */
      size = qsb_get_length(buffer);
      colo_put_cmd_value(s->to_dst_file, COLO_MESSAGE_VMSTATE_SIZE,
@@ -573,6 +589,16 @@ void *colo_process_incoming_thread(void *opaque)
              goto out;
          }

+        ret = qemu_loadvm_state_begin(mis->from_src_file);
+        if (ret < 0) {
+            error_report("Load vm state begin error, ret=%d", ret);
+            goto out;
+        }
+        ret = qemu_loadvm_state_main(mis->from_src_file, mis);
+        if (ret < 0) {
+            error_report("Load VM's live state (ram) error");
+            goto out;
+        }
          /* read the VM state total size first */
          value = colo_get_cmd_value(mis->from_src_file,
                                   COLO_MESSAGE_VMSTATE_SIZE, &local_err);
@@ -605,8 +631,10 @@ void *colo_process_incoming_thread(void *opaque)
          qemu_mutex_lock_iothread();
          qemu_system_reset(VMRESET_SILENT);
          vmstate_loading = true;
-        if (qemu_loadvm_state(fb) < 0) {
-            error_report("COLO: loadvm failed");
+        colo_flush_ram_cache();
+        ret = qemu_load_device_state(fb);
+        if (ret < 0) {
+            error_report("COLO: load device state failed");
              qemu_mutex_unlock_iothread();
              goto out;
          }
diff --git a/migration/ram.c b/migration/ram.c
index 891f3b2..8f416d5 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2465,7 +2465,6 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
       * be atomic
       */
      bool postcopy_running = postcopy_state_get() >= POSTCOPY_INCOMING_LISTENING;
-    bool need_flush = false;

      seq_iter++;

@@ -2500,7 +2499,6 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
              /* After going into COLO, we should load the Page into colo_cache */
              if (ram_cache_enable) {
                  host = colo_cache_from_block_offset(block, addr);
-                need_flush = true;
              } else {
                  host = host_from_ram_block_offset(block, addr);
              }
@@ -2594,9 +2592,6 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)

      rcu_read_unlock();

-    if (!ret  && ram_cache_enable && need_flush) {
-        colo_flush_ram_cache();
-    }
      DPRINTF("Completed load of VM with exit code %d seq iteration "
              "%" PRIu64 "\n", ret, seq_iter);
      return ret;
diff --git a/migration/savevm.c b/migration/savevm.c
index 60c7b57..1551fbb 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -50,6 +50,7 @@
  #include "qemu/iov.h"
  #include "block/snapshot.h"
  #include "block/qapi.h"
+#include "migration/colo.h"


  #ifndef ETH_P_RARP
@@ -923,6 +924,10 @@ void qemu_savevm_state_begin(QEMUFile *f,
              break;
          }
      }
+    if (migration_in_colo_state()) {
+        qemu_put_byte(f, QEMU_VM_EOF);
+        qemu_fflush(f);
+    }
  }

  /*
--
1.8.3.1


--
Dr. David Alan Gilbert / address@hidden / Manchester, UK





