
Re: [Qemu-devel] migration: broken ram_save_pending


From: Paolo Bonzini
Subject: Re: [Qemu-devel] migration: broken ram_save_pending
Date: Fri, 07 Feb 2014 00:49:11 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0

On 06/02/2014 04:10, Alexey Kardashevskiy wrote:
>> > Ok, I thought Alexey was saying we are not redirtying that handful of 
>> > pages.
> 
> Every iteration we read the dirty map from KVM and send all dirty pages
> across the stream.

But we never finish because qemu_savevm_state_pending is only called _after_
the g_usleep?  And thus there's time for the guest to redirty those pages.
Does something like this fix it (of course, for a proper patch the goto
should be eliminated)?
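
For context, the ordering in the current migration_thread loop is roughly
this (a simplified sketch paraphrasing migration.c, not the exact code;
MIG_STATE_ACTIVE and qemu_savevm_state_iterate are recalled names, so treat
them as approximate):

    while (s->state == MIG_STATE_ACTIVE) {
        if (!qemu_file_rate_limit(s->file)) {
            pending_size = qemu_savevm_state_pending(s->file, max_size);
            if (pending_size >= max_size) {
                /* another pass: send the currently dirty pages */
                qemu_savevm_state_iterate(s->file);
            } else {
                /* final phase: stop the guest and send what is left */
            }
        }
        /* ... every BUFFER_DELAY ms: recompute bandwidth, reset the
         * rate limit, update initial_time/initial_bytes ... */
        if (qemu_file_rate_limit(s->file)) {
            /* usleep expects microseconds; while we sleep the guest keeps
             * running and redirtying pages, and pending is not re-checked
             * until the next pass through the loop */
            g_usleep((initial_time + BUFFER_DELAY - current_time) * 1000);
        }
    }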

diff --git a/migration.c b/migration.c
index 7235c23..804c3bd 100644
--- a/migration.c
+++ b/migration.c
@@ -589,6 +589,7 @@ static void *migration_thread(void *opaque)
             } else {
                 int ret;
 
+final_phase:
                 DPRINTF("done iterating\n");
                 qemu_mutex_lock_iothread();
                 start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
@@ -640,10 +641,16 @@ static void *migration_thread(void *opaque)
             qemu_file_reset_rate_limit(s->file);
             initial_time = current_time;
             initial_bytes = qemu_ftell(s->file);
-        }
-        if (qemu_file_rate_limit(s->file)) {
-            /* usleep expects microseconds */
-            g_usleep((initial_time + BUFFER_DELAY - current_time)*1000);
+        } else if (qemu_file_rate_limit(s->file)) {
+            pending_size = qemu_savevm_state_pending(s->file, max_size);
+            DPRINTF("pending size %" PRIu64 " max %" PRIu64 "\n",
+                    pending_size, max_size);
+            if (pending_size >= max_size) {
+                /* usleep expects microseconds */
+                g_usleep((initial_time + BUFFER_DELAY - current_time)*1000);
+            } else {
+                goto final_phase;
+            }
         }
     }
 