
Re: [Qemu-devel] [PATCH 19/23] userfaultfd: activate syscall


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH 19/23] userfaultfd: activate syscall
Date: Mon, 14 Sep 2015 19:53:35 +0100
User-agent: Mutt/1.5.23 (2014-03-12)

* Bharata B Rao (address@hidden) wrote:
> (cc trimmed since this looks like an issue that is contained within QEMU)
> 
> On Tue, Sep 08, 2015 at 03:13:56PM +0100, Dr. David Alan Gilbert wrote:
> > * Bharata B Rao (address@hidden) wrote:
> > > On Tue, Sep 08, 2015 at 01:46:52PM +0100, Dr. David Alan Gilbert wrote:
> > > > * Bharata B Rao (address@hidden) wrote:
> > > > > On Tue, Sep 08, 2015 at 09:59:47AM +0100, Dr. David Alan Gilbert wrote:
> > > > > > * Bharata B Rao (address@hidden) wrote:
> > > > > > > In fact I had successfully done postcopy migration of sPAPR guest with this setup.
> > > > > > 
> > > > > > Interesting - I'd not got that far myself on power; I was hitting a problem loading htab
> > > > > > ( htab_load() bad index 2113929216 (14848+0 entries) in htab stream (htab_shift=25) )
> > > > > > 
> > > > > > Did you have to make any changes to the qemu code to get that happy?
> > > > > 
> > > > > I should have mentioned that I tried only QEMU driven migration within
> > > > > the same host using wp3-postcopy branch of your tree. I don't see the
> > > > > above issue.
> > > > > 
> > > > > (qemu) info migrate
> > > > > capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off compress: off x-postcopy-ram: on
> > > > > Migration status: completed
> > > > > total time: 39432 milliseconds
> > > > > downtime: 162 milliseconds
> > > > > setup: 14 milliseconds
> > > > > transferred ram: 1297209 kbytes
> > > > > throughput: 270.72 mbps
> > > > > remaining ram: 0 kbytes
> > > > > total ram: 4194560 kbytes
> > > > > duplicate: 734015 pages
> > > > > skipped: 0 pages
> > > > > normal: 318469 pages
> > > > > normal bytes: 1273876 kbytes
> > > > > dirty sync count: 4
> > > > > 
> > > > > I will try migration between different hosts soon and check.
> > > > 
> > > > I hit that on the same host; are you sure you've switched into postcopy mode,
> > > > i.e. issued a migrate_start_postcopy before the end of migration?
> > > 
> > > Sorry, I was following your discussion with Li in this thread
> > > 
> > > https://www.marc.info/?l=qemu-devel&m=143035620026744&w=4
> > > 
> > > and it wasn't obvious to me that anything apart from turning on the
> > > x-postcopy-ram capability was required :(
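
For reference, a minimal HMP sequence that actually enters postcopy looks roughly like the one below. This is illustrative only: at this point the capability is still the experimental x-postcopy-ram, the destination URI tcp:dst:4444 is made up for the example, and the destination side typically needs the capability set as well.

    (qemu) migrate_set_capability x-postcopy-ram on
    (qemu) migrate -d tcp:dst:4444
    (qemu) migrate_start_postcopy
    (qemu) info migrate

Setting the capability only makes the switch possible; nothing actually goes postcopy until migrate_start_postcopy is issued on the source monitor.
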
> > 
> > OK.
> > 
> > > So I do see the problem now.
> > > 
> > > At the source
> > > -------------
> > > Error reading data from KVM HTAB fd: Bad file descriptor
> > > Segmentation fault
> > > 
> > > At the target
> > > -------------
> > > htab_load() bad index 2113929216 (14336+0 entries) in htab stream (htab_shift=25)
> > > qemu-system-ppc64: error while loading state section id 56(spapr/htab)
> > > qemu-system-ppc64: postcopy_ram_listen_thread: loadvm failed: -22
> > > qemu-system-ppc64: VQ 0 size 0x100 Guest index 0x0 inconsistent with Host index 0x1f: delta 0xffe1
> > > qemu-system-ppc64: error while loading state for instance 0x0 of device 'address@hidden:00.0/virtio-net'
> > > *** Error in `./ppc64-softmmu/qemu-system-ppc64': corrupted double-linked list: 0x00000100241234a0 ***
> > > ======= Backtrace: =========
> > > /lib64/power8/libc.so.6Segmentation fault
> > 
> > Good - my current world has got rid of the segfaults/corruption in the
> > cleanup on power - but those only show up after it has stumbled over the
> > htab problem.
> > 
> > I don't know the innards of power/htab, so if you've got any pointers on
> > what upset it, I'd be grateful for them.
>  
> When migrate_start_postcopy is issued, the SaveStateEntry's save_live_iterate
> call for HTAB arrives after save_live_complete. In the HTAB case,
> spapr->htab_fd is closed when HTAB saving completes in the save_live_complete
> handler, so when a save_live_iterate call comes in after that, we end up
> accessing an invalid fd, which results in the migration failure we are
> seeing here.
> 
> - With postcopy migration, is it expected to get a save_live_iterate
>   call after save_live_complete? IIUC, save_live_complete signals the
>   completion of saving. Is the save_live_iterate handler expected to
>   handle this condition?
> 
> I am able to get past this failure and have migration complete successfully
> with the hack below, where I teach the save_live_iterate handler to ignore
> requests that arrive after save_live_complete has been called.
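
The hack itself isn't quoted above, but the shape of such a guard would be roughly as sketched below. This is an illustration only, not the actual patch: the htab_save_completed flag is a hypothetical field added for the example, and the handler is assumed to be the htab_save_iterate() registered for the sPAPR HTAB SaveVMHandlers in hw/ppc/spapr.c.

    /* Illustrative sketch only, not the posted hack: htab_save_completed is
     * a hypothetical flag that the _complete handler would set before it
     * closes spapr->htab_fd.
     */
    static int htab_save_iterate(QEMUFile *f, void *opaque)
    {
        sPAPRMachineState *spapr = opaque;

        /* A late iterate call after _complete has run would otherwise read
         * from a closed fd; report "finished" instead of touching it.
         */
        if (spapr->htab_save_completed) {
            return 1;
        }

        /* ... normal path: stream HPTEs read from spapr->htab_fd ... */
        return 0;
    }

The reply below takes the more general route and fixes this once in the core savevm code rather than per device.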

The fix I'm going with is included below; it's only smoke-tested on x86 so far,
so I'll grab a Power box to test it on before I republish this set.
(This is against my working tree rather than the version I last published,
but it should be reasonably close.)


From c51e5f8e8cef4ca5a47c1446803a9b35aa7d738d Mon Sep 17 00:00:00 2001
From: "Dr. David Alan Gilbert" <address@hidden>
Date: Mon, 14 Sep 2015 19:27:45 +0100
Subject: [PATCH] Don't iterate on precopy-only devices during postcopy

During the postcopy phase we must not call the iterate method on
precopy-only devices, since they may have done some cleanup during
the _complete call at the end of the precopy phase.

Signed-off-by: Dr. David Alan Gilbert <address@hidden>
---
 include/sysemu/sysemu.h |  2 +-
 migration/migration.c   |  2 +-
 migration/savevm.c      | 13 +++++++++++--
 3 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index ccf278e..018a628 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -108,7 +108,7 @@ bool qemu_savevm_state_blocked(Error **errp);
 void qemu_savevm_state_begin(QEMUFile *f,
                              const MigrationParams *params);
 void qemu_savevm_state_header(QEMUFile *f);
-int qemu_savevm_state_iterate(QEMUFile *f);
+int qemu_savevm_state_iterate(QEMUFile *f, bool postcopy);
 void qemu_savevm_state_complete_postcopy(QEMUFile *f);
 void qemu_savevm_state_complete_precopy(QEMUFile *f);
 void qemu_savevm_state_cancel(void);
diff --git a/migration/migration.c b/migration/migration.c
index 0468bc4..e9e8f6a 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1589,7 +1589,7 @@ static void *migration_thread(void *opaque)
                     continue;
                 }
                 /* Just another iteration step */
-                qemu_savevm_state_iterate(s->file);
+                qemu_savevm_state_iterate(s->file, entered_postcopy);
             } else {
                 trace_migration_thread_low_pending(pending_size);
 
diff --git a/migration/savevm.c b/migration/savevm.c
index 42f67a6..9ae9841 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -931,7 +931,7 @@ void qemu_savevm_state_begin(QEMUFile *f,
  *   0 : We haven't finished, caller have to go again
  *   1 : We have finished, we can go to complete phase
  */
-int qemu_savevm_state_iterate(QEMUFile *f)
+int qemu_savevm_state_iterate(QEMUFile *f, bool postcopy)
 {
     SaveStateEntry *se;
     int ret = 1;
@@ -946,6 +946,15 @@ int qemu_savevm_state_iterate(QEMUFile *f)
                 continue;
             }
         }
+        /*
+         * In the postcopy phase, any device that doesn't know how to
+         * do postcopy should have saved its state in the _complete
+         * call that has already run; it might get confused if we call
+         * iterate afterwards.
+         */
+        if (postcopy && !se->ops->save_live_complete_postcopy) {
+            return 0;
+        }
         if (qemu_file_rate_limit(f)) {
             return 0;
         }
@@ -1160,7 +1169,7 @@ static int qemu_savevm_state(QEMUFile *f, Error **errp)
     qemu_mutex_lock_iothread();
 
     while (qemu_file_get_error(f) == 0) {
-        if (qemu_savevm_state_iterate(f) > 0) {
+        if (qemu_savevm_state_iterate(f, false) > 0) {
             break;
         }
     }
-- 
2.4.3

--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


