Re: [Qemu-devel] [PATCH 1/2] add non-arbitrary migration stop condition


From: Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH 1/2] add non-arbitrary migration stop condition
Date: Thu, 21 May 2009 21:08:30 -0500
User-agent: Thunderbird 2.0.0.21 (X11/20090320)

Glauber Costa wrote:

Signed-off-by: Glauber Costa <address@hidden>
---
 migration.c |    7 +++++++
 migration.h |    2 ++
 vl.c        |   14 ++++++++++++--
 3 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/migration.c b/migration.c
index 401383c..4036e64 100644
--- a/migration.c
+++ b/migration.c
@@ -107,6 +107,13 @@ void do_migrate_set_speed(Monitor *mon, const char *value)
 }
+static int64_t max_downtime = 30000000;

In units of..? Wouldn't it make sense to store this in milliseconds or microseconds as opposed to nanoseconds?
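
For reference, 30000000 here is presumably nanoseconds (30 ms), matching get_clock()'s nanosecond resolution. A sketch of making the unit explicit instead of a bare magic number (the NSEC_PER_MSEC name is illustrative, not from the patch):

#define NSEC_PER_MSEC 1000000LL

static int64_t max_downtime = 30 * NSEC_PER_MSEC;  /* 30 ms, stored in ns */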

+
+int64_t migrate_max_downtime(void)
+{
+    return max_downtime;
+}
+
 void do_info_migrate(Monitor *mon)
 {
     MigrationState *s = current_migration;
diff --git a/migration.h b/migration.h
index 696618d..b0637ba 100644
--- a/migration.h
+++ b/migration.h
@@ -55,6 +55,8 @@ void do_migrate_cancel(Monitor *mon);
 void do_migrate_set_speed(Monitor *mon, const char *value);

+int64_t migrate_max_downtime(void);
+
 void do_info_migrate(Monitor *mon);
 int exec_start_incoming_migration(const char *host_port);
diff --git a/vl.c b/vl.c
index 346da57..5ca06f9 100644
--- a/vl.c
+++ b/vl.c
@@ -3235,7 +3235,6 @@ static int ram_save_block(QEMUFile *f)
     return found;
 }
-static ram_addr_t ram_save_threshold = 10;
 static uint64_t bytes_transferred = 0;
 static ram_addr_t ram_save_remaining(void)
@@ -3269,6 +3268,9 @@ uint64_t ram_bytes_total(void)
 static int ram_save_live(QEMUFile *f, int stage, void *opaque)
 {
     ram_addr_t addr;
+    uint64_t bytes_transferred_last;
+    double bwidth = 0;
+    int64_t expected_time = 0;

     if (stage == 1) {
         /* Make sure all dirty bits are set */
@@ -3283,6 +3285,9 @@ static int ram_save_live(QEMUFile *f, int stage, void *opaque)
         qemu_put_be64(f, last_ram_offset | RAM_SAVE_FLAG_MEM_SIZE);
     }
+    bytes_transferred_last = bytes_transferred;
+    bwidth = get_clock();
+
     while (!qemu_file_rate_limit(f)) {
         int ret;
@@ -3292,6 +3297,9 @@ static int ram_save_live(QEMUFile *f, int stage, void *opaque)
             break;
     }
+    bwidth = get_clock() - bwidth;

This isn't quite right. If you hit the rate limit, you're calculating bandwidth based only on the time before you hit the rate limit. But if the user specified a rate limit, they want you to adhere to that limit. To put it another way, you could consume twice the rate-limited bandwidth in order to complete the migration.
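
To make the concern concrete, here is a sketch of the stop condition as I read the rest of the patch (an illustrative reconstruction, not the literal code):

/* bwidth was a timestamp above; here it becomes a rate in bytes per
 * nanosecond, measured only over the un-throttled transfer window. */
bwidth = (bytes_transferred - bytes_transferred_last) / bwidth;

/* Projected time to copy the remaining dirty pages in one final pass. */
expected_time = ram_save_remaining() * TARGET_PAGE_SIZE / bwidth;

/* If the loop above exited because qemu_file_rate_limit() tripped,
 * bwidth reflects the burst rate before throttling, not the enforced
 * limit, so expected_time underestimates the stage-3 downtime actually
 * achievable within the user's bandwidth cap. */
return (stage == 2) && (expected_time <= migrate_max_downtime());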

Have you measured the actual downtime in a guest too? I suspect your downtime is significantly higher now. I'm curious how closely your threshold matches real-world observation.
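
A hypothetical way to measure it from inside the guest (not something in the tree): spin on a monotonic clock and record the largest gap between successive samples. Assuming the guest clocksource keeps counting across the pause (e.g. kvmclock or a passthrough TSC), the maximum gap approximates the real downtime:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

static int64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
    int64_t prev = now_ns(), max_gap = 0;

    for (;;) {
        int64_t t = now_ns();
        if (t - prev > max_gap) {
            max_gap = t - prev;
            printf("max gap so far: %.3f ms\n", max_gap / 1e6);
            fflush(stdout);
        }
        prev = t;
    }
}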

Regards,

Anthony Liguori



