
Re: [Qemu-devel] [PATCH RESEND V3 4/6] migration: add postcopy downtime into MigrationIncomingState


From: Alexey Perevalov
Subject: Re: [Qemu-devel] [PATCH RESEND V3 4/6] migration: add postcopy downtime into MigrationIncomingState
Date: Fri, 28 Apr 2017 13:03:45 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0

On 04/28/2017 12:38 PM, Peter Xu wrote:
On Fri, Apr 28, 2017 at 09:57:36AM +0300, Alexey Perevalov wrote:
This patch adds a request to kernel space for UFFD_FEATURE_THREAD_ID,
in case this feature is provided by the kernel.

DowntimeContext is encapsulated inside migration.c.

Signed-off-by: Alexey Perevalov <address@hidden>
---
  include/migration/migration.h | 12 ++++++++++++
  migration/migration.c         | 33 +++++++++++++++++++++++++++++++++
  migration/postcopy-ram.c      |  8 ++++++++
  3 files changed, 53 insertions(+)
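
(For readers new to userfaultfd: the feature negotiation this patch builds
on looks roughly like the sketch below. This is not QEMU code;
probe_thread_id_support is a made-up name and error handling is trimmed.
Passing .features == 0 to the UFFDIO_API ioctl makes the kernel report
every feature it supports; a feature is then enabled by issuing UFFDIO_API
with that bit set on the descriptor that will actually handle faults,
since the ioctl may only be issued once per fd.)

#include <fcntl.h>
#include <stdbool.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

static bool probe_thread_id_support(void)
{
    struct uffdio_api api = { .api = UFFD_API, .features = 0 };
    bool supported = false;
    /* throwaway fd, used only to ask the kernel what it supports */
    int ufd = syscall(__NR_userfaultfd, O_CLOEXEC);

    if (ufd < 0) {
        return false;
    }
    if (ioctl(ufd, UFFDIO_API, &api) == 0) {
#ifdef UFFD_FEATURE_THREAD_ID
        supported = !!(api.features & UFFD_FEATURE_THREAD_ID);
#endif
    }
    close(ufd);
    return supported;
}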

diff --git a/include/migration/migration.h b/include/migration/migration.h
index ba1a16c..e8fb68f 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -83,6 +83,8 @@ typedef enum {
      POSTCOPY_INCOMING_END
  } PostcopyState;
+struct DowntimeContext;

Nit: shall we embed something like "Postcopy" (or a short form of it)
into this struct name? The whole thing is really tailored for
postcopy only.
Yes, that is a postcopy-only structure, so maybe PostcopyDowntimeContext
would be more readable.


+
  /* State for the incoming migration */
  struct MigrationIncomingState {
      QEMUFile *from_src_file;
@@ -123,10 +125,20 @@ struct MigrationIncomingState {
    /* See savevm.c */
      LoadStateEntry_Head loadvm_handlers;
+
+    /*
+     * DowntimeContext to keep information for postcopy
+     * live migration, to calculate downtime
+     */
+    struct DowntimeContext *downtime_ctx;
  };
MigrationIncomingState *migration_incoming_get_current(void);
  void migration_incoming_state_destroy(void);
+/*
+ * Functions to work with downtime context
+ */
+struct DowntimeContext *downtime_context_new(void);
struct MigrationState
  {
diff --git a/migration/migration.c b/migration/migration.c
index 569a7f6..ec76e5c 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -77,6 +77,18 @@ static NotifierList migration_state_notifiers =
static bool deferred_incoming;

+typedef struct DowntimeContext {
+    /* time when page fault initiated per vCPU */
+    int64_t *page_fault_vcpu_time;
+    /* page address per vCPU */
+    uint64_t *vcpu_addr;
+    int64_t total_downtime;
+    /* downtime per vCPU */
+    int64_t *vcpu_downtime;
+    /* point in time when last page fault was initiated */
+    int64_t last_begin;
+} DowntimeContext;
+
  /*
   * Current state of incoming postcopy; note this is not part of
   * MigrationIncomingState since its state is used during cleanup
@@ -116,6 +128,23 @@ MigrationState *migrate_get_current(void)
      return &current_migration;
  }
+struct DowntimeContext *downtime_context_new(void)
+{
+    DowntimeContext *ctx = g_new0(DowntimeContext, 1);
+    ctx->page_fault_vcpu_time = g_new0(int64_t, smp_cpus);
+    ctx->vcpu_addr = g_new0(uint64_t, smp_cpus);
+    ctx->vcpu_downtime = g_new0(int64_t, smp_cpus);
+    return ctx;
+}
+
+static void destroy_downtime_context(struct DowntimeContext *ctx)
+{
+    g_free(ctx->page_fault_vcpu_time);
+    g_free(ctx->vcpu_addr);
+    g_free(ctx->vcpu_downtime);
+    g_free(ctx);
+}
+
  MigrationIncomingState *migration_incoming_get_current(void)
  {
      static bool once;
@@ -138,6 +167,10 @@ void migration_incoming_state_destroy(void)
    qemu_event_destroy(&mis->main_thread_load_event);
      loadvm_free_handlers(mis);
+    if (mis->downtime_ctx) {
+        destroy_downtime_context(mis->downtime_ctx);
+        mis->downtime_ctx = NULL;
+    }
  }
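
(Side note on how these fields are meant to be used — the actual
accounting lands later in the series, so this is only a rough sketch with
made-up helper names and a caller-supplied timestamp: when a fault arrives
for a page a vCPU is blocked on, the fault time is recorded for that vCPU;
once the page has been copied in, the elapsed interval is added to that
vCPU's downtime.)

static void downtime_begin(DowntimeContext *ctx, int cpu,
                           uint64_t addr, int64_t now)
{
    /* vCPU 'cpu' just faulted on 'addr' and is blocked from here on */
    ctx->vcpu_addr[cpu] = addr;
    ctx->page_fault_vcpu_time[cpu] = now;
    ctx->last_begin = now;
}

static void downtime_end(DowntimeContext *ctx, int cpu, int64_t now)
{
    /* the page vCPU 'cpu' was waiting for has been copied in */
    if (ctx->vcpu_addr[cpu]) {
        ctx->vcpu_downtime[cpu] += now - ctx->page_fault_vcpu_time[cpu];
        ctx->vcpu_addr[cpu] = 0;
    }
}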
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 21e7150..f3688f5 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -132,6 +132,14 @@ static bool ufd_version_check(int ufd, MigrationIncomingState *mis)
          return false;
      }
+#ifdef UFFD_FEATURE_THREAD_ID
+    if (mis && UFFD_FEATURE_THREAD_ID & supported_features) {
+        /* kernel supports that feature */
+        mis->downtime_ctx = downtime_context_new();
+        new_features |= UFFD_FEATURE_THREAD_ID;

So here I know why in patch 2 new_features == 0...

If I were you, I would do the series in the four patches below:

1. update the header
2. introduce the THREAD_ID feature, and enable it conditionally
3. squash all the downtime work (downtime context, calculation) into
    one patch here
4. introduce tracing

IMHO that's clearer and easier to review. But I'm okay with the current
split as well, as long as the maintainers (Dave/Juan) don't disagree. :)
In the previous series, David asked me to split one patch into two:
[Qemu-devel] [PATCH 3/6] migration: add UFFD_FEATURE_THREAD_ID feature support

>There seem to be two parts to this:
>  a) Adding the mis parameter to ufd_version_check
>  b) Asking for the feature

>Please split it into two patches.

So in the current patch set I also added the refactoring that was missed before:
"migration: split ufd_version_check onto receive/request features part"


Thanks,

+    }
+#endif
+
    /* request features */
      if (new_features && !request_ufd_features(ufd, new_features)) {
          error_report("ufd_version_check failed: features %" PRIu64,
--
1.9.1
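
(To illustrate what UFFD_FEATURE_THREAD_ID buys: once the feature is
enabled, every page-fault message carries the faulting thread's id in
msg.arg.pagefault.feat.ptid, which is what makes attributing a fault to a
specific vCPU possible in the first place. A minimal sketch follows;
vcpu_index_from_tid and now_ms are hypothetical helpers, and the fault
thread's poll/read loop is reduced to a single read().)

static void handle_one_fault(int ufd, DowntimeContext *ctx)
{
    struct uffd_msg msg;

    if (read(ufd, &msg, sizeof(msg)) != sizeof(msg)) {
        return;
    }
    if (msg.event != UFFD_EVENT_PAGEFAULT) {
        return;
    }
#ifdef UFFD_FEATURE_THREAD_ID
    {
        /* ptid is only valid when UFFD_FEATURE_THREAD_ID was requested */
        int cpu = vcpu_index_from_tid(msg.arg.pagefault.feat.ptid);
        if (cpu >= 0) {
            downtime_begin(ctx, cpu, msg.arg.pagefault.address, now_ms());
        }
    }
#endif
}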



--
Best regards,
Alexey Perevalov


