Re: [Qemu-devel] [RFC PATCH v2 02/12] mc: timestamp migration_bitmap and KVM logdirty usage


From: Michael R. Hines
Subject: Re: [Qemu-devel] [RFC PATCH v2 02/12] mc: timestamp migration_bitmap and KVM logdirty usage
Date: Fri, 04 Apr 2014 11:08:50 +0800
User-agent: Mozilla/5.0 (X11; Linux i686; rv:24.0) Gecko/20100101 Thunderbird/24.4.0

On 03/12/2014 05:31 AM, Juan Quintela wrote:
address@hidden wrote:
From: "Michael R. Hines" <address@hidden>

We also later export these statistics over QMP for better
monitoring of micro-checkpointing as the workload changes.

Signed-off-by: Michael R. Hines <address@hidden>
---
  arch_init.c | 34 ++++++++++++++++++++++++++++------
  1 file changed, 28 insertions(+), 6 deletions(-)

diff --git a/arch_init.c b/arch_init.c
index 80574a0..b8364b0 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -193,6 +193,8 @@ typedef struct AccountingInfo {
      uint64_t skipped_pages;
      uint64_t norm_pages;
      uint64_t iterations;
+    uint64_t log_dirty_time;
+    uint64_t migration_bitmap_time;
      uint64_t xbzrle_bytes;
      uint64_t xbzrle_pages;
      uint64_t xbzrle_cache_miss;
@@ -201,7 +203,7 @@ typedef struct AccountingInfo {
  static AccountingInfo acct_info;

-static void acct_clear(void)
+void acct_clear(void)
  {
      memset(&acct_info, 0, sizeof(acct_info));
  }
@@ -236,6 +238,16 @@ uint64_t norm_mig_pages_transferred(void)
      return acct_info.norm_pages;
  }
+uint64_t norm_mig_log_dirty_time(void)
+{
+    return acct_info.log_dirty_time;
+}
+
+uint64_t norm_mig_bitmap_time(void)
+{
+    return acct_info.migration_bitmap_time;
+}
+
  uint64_t xbzrle_mig_bytes_transferred(void)
  {
      return acct_info.xbzrle_bytes;
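
(As an aside on the QMP monitoring mentioned in the commit message: both counters are
cumulative milliseconds, so a client can derive the per-checkpoint cost by sampling them
over time. The standalone C sketch below illustrates that consumer side; it is not QEMU
code, and MCStats and sample_stats() are invented names.)

    /* Standalone sketch, not QEMU code: shows how a monitoring client could
     * turn the cumulative counters into per-checkpoint overhead.  MCStats and
     * sample_stats() are hypothetical names. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct MCStats {
        uint64_t log_dirty_time;        /* total ms spent in KVM log-dirty sync */
        uint64_t migration_bitmap_time; /* total ms spent scanning the bitmap   */
    } MCStats;

    /* Stand-in for reading norm_mig_log_dirty_time()/norm_mig_bitmap_time()
     * through QMP; here we simply fabricate two samples. */
    static MCStats sample_stats(int iteration)
    {
        MCStats s = { .log_dirty_time = 40u * iteration,
                      .migration_bitmap_time = 12u * iteration };
        return s;
    }

    int main(void)
    {
        MCStats prev = sample_stats(1);
        MCStats cur  = sample_stats(2);

        /* Per-checkpoint overhead is the delta between two samples. */
        printf("log-dirty: +%" PRIu64 " ms, bitmap scan: +%" PRIu64 " ms\n",
               cur.log_dirty_time - prev.log_dirty_time,
               cur.migration_bitmap_time - prev.migration_bitmap_time);
        return 0;
    }
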
@@ -426,27 +438,35 @@ static void migration_bitmap_sync(void)
      static int64_t num_dirty_pages_period;
      int64_t end_time;
      int64_t bytes_xfer_now;
+    int64_t begin_time;
+    int64_t dirty_time;
      if (!bytes_xfer_prev) {
          bytes_xfer_prev = ram_bytes_transferred();
      }

+    begin_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
      if (!start_time) {
-        start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+        start_time = begin_time;
      }

Although I think we need to search for better names?

start_time --> migration_start_time
begin_time --> iteration_start_time
?

Will do. These new names are fine - no problem =)

I am open to better names.
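
To make the intent of the instrumentation easier to follow, here is a stripped-down
sketch of the pattern the hunk above adds, using the renamed variables. It is not the
actual QEMU code: clock_ms(), sync_kvm_log_dirty() and scan_migration_bitmap() are
placeholders for qemu_clock_get_ms() and the real sync/scan calls.

    /* Stripped-down sketch of the instrumentation pattern added in
     * migration_bitmap_sync(); not the actual QEMU code.  clock_ms() stands in
     * for qemu_clock_get_ms(QEMU_CLOCK_REALTIME), and the two helpers are
     * placeholders for the KVM log-dirty sync and the migration-bitmap scan. */
    #include <stdint.h>
    #include <time.h>

    static uint64_t log_dirty_time;        /* accumulated, read by norm_mig_log_dirty_time() */
    static uint64_t migration_bitmap_time; /* accumulated, read by norm_mig_bitmap_time()    */

    static int64_t clock_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);
        return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    }

    static void sync_kvm_log_dirty(void)    { /* placeholder for the log-dirty sync */ }
    static void scan_migration_bitmap(void) { /* placeholder for the bitmap scan    */ }

    static void bitmap_sync_sketch(void)
    {
        int64_t iteration_start_time = clock_ms();   /* "begin_time" in the patch */

        sync_kvm_log_dirty();
        int64_t dirty_time = clock_ms();             /* log-dirty phase done */

        scan_migration_bitmap();
        int64_t end_time = clock_ms();               /* bitmap phase done */

        /* Accumulate the two phases separately so they can be reported over QMP. */
        log_dirty_time        += dirty_time - iteration_start_time;
        migration_bitmap_time += end_time - dirty_time;
    }

    int main(void)
    {
        bitmap_sync_sketch();
        return 0;
    }
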

@@ -548,9 +568,11 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
              /* XBZRLE overflow or normal page */
              if (bytes_sent == -1) {
                  bytes_sent = save_block_hdr(f, block, offset, cont, RAM_SAVE_FLAG_PAGE);
-                qemu_put_buffer_async(f, p, TARGET_PAGE_SIZE);
-                bytes_sent += TARGET_PAGE_SIZE;
-                acct_info.norm_pages++;
+                if (ret != RAM_SAVE_CONTROL_DELAYED) {
+                    qemu_put_buffer_async(f, p, TARGET_PAGE_SIZE);
+                    bytes_sent += TARGET_PAGE_SIZE;
+                    acct_info.norm_pages++;
+                }
              }
              /* if page is unmodified, continue to the next */
Except for this bit, rest of the patch ok.



The goal of this patch is to allow the virtual machine to resume execution of the
main VCPUs "as soon as possible" after each checkpoint completes.
To make that possible, the other micro-checkpointing implementations all use a
"staging" buffer:

The staging buffer holds a complete local copy of the dirty memory, captured *before*
any of it is transmitted to the other side. Once that copy exists, the virtual machine
can resume execution immediately, without waiting for the memory to be transmitted
over the connection.
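
As a rough illustration of that flow (a sketch only, not the mc code; the page layout
and names such as StagingBuffer, stage_dirty_pages() and flush_staging_buffer() are
made up):

    /* Rough sketch of the staging-buffer idea described above; not the mc
     * implementation.  With the VCPUs paused, every dirty page is copied into
     * a local buffer; the guest then resumes while the buffer is pushed to the
     * destination in the background. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    typedef struct StagedPage {
        uint64_t addr;               /* guest physical address of the page */
        uint8_t  data[PAGE_SIZE];    /* snapshot of its contents */
    } StagedPage;

    typedef struct StagingBuffer {
        StagedPage *pages;
        size_t      count;
    } StagingBuffer;

    /* Phase 1 (guest paused): snapshot the dirty pages locally. */
    static void stage_dirty_pages(StagingBuffer *buf, const uint8_t *ram,
                                  const uint64_t *dirty_addrs, size_t ndirty)
    {
        buf->pages = malloc(ndirty * sizeof(StagedPage));
        if (!buf->pages) {
            buf->count = 0;
            return;
        }
        buf->count = ndirty;
        for (size_t i = 0; i < ndirty; i++) {
            buf->pages[i].addr = dirty_addrs[i];
            memcpy(buf->pages[i].data, ram + dirty_addrs[i], PAGE_SIZE);
        }
    }

    /* Phase 2 (guest already running again): transmit the snapshot.
     * send_page() is a placeholder for the actual transport. */
    static void flush_staging_buffer(StagingBuffer *buf,
                                     void (*send_page)(uint64_t addr,
                                                       const uint8_t *data))
    {
        for (size_t i = 0; i < buf->count; i++) {
            send_page(buf->pages[i].addr, buf->pages[i].data);
        }
        free(buf->pages);
        buf->pages = NULL;
        buf->count = 0;
    }
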

Since this patch is critical to performance, I'll make it a separate patch with
its own summary in the series.

- Michael



