
[Qemu-devel] Re: [PATCH 1/5] Exit if incoming migration fails


From: Juan Quintela
Subject: [Qemu-devel] Re: [PATCH 1/5] Exit if incoming migration fails
Date: Tue, 25 May 2010 20:37:07 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/23.1 (gnu/linux)

Luiz Capitulino <address@hidden> wrote:
> On Tue, 25 May 2010 16:21:01 +0200
> Juan Quintela <address@hidden> wrote:
>
>> Signed-off-by: Juan Quintela <address@hidden>
>> ---
>>  migration.c |   16 ++++++++++------
>>  migration.h |    2 +-
>>  vl.c        |    7 ++++++-
>>  3 files changed, 17 insertions(+), 8 deletions(-)
>> 

>  While I agree on the change, I have two comments:
>
> 1. By taking a look at the code I have the impression that most of the
>    fun failures will happen on the handler passed to qemu_set_fd_handler2(),
>    do you agree? Any plan to address that?

That is outgoing migration, not incoming migration.
Incoming migration is synchronous.
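For context, the synchronous case looks roughly like this (image name, port, and host name are made up for illustration, not taken from the patch): the destination QEMU is started with `-incoming` and blocks reading the migration stream before any guest runs, so a failure there happens while there is no guest state worth keeping.

```shell
# Destination: QEMU blocks here reading the migration stream before
# the guest ever runs, so an error occurs with no running guest.
qemu-system-x86_64 -hda guest.img -incoming tcp:0:4444

# Source (from the QEMU monitor):
#   (qemu) migrate tcp:dest-host:4444
```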


> 2. Is exit()ing the best thing to be done? I understand it's the easiest
>    and maybe better than nothing, but wouldn't it be better to enter in
>    paused-forever state so that clients can query and decide what to do?

For incoming migration, if it fails in the middle, all bets are off.
You are in a really inconsistent state (and you can't know which one),
and if the migration was live, the other host may be retaking the disks
to continue running.

In some cases, you can't do anything:
- you were passed an fd, and the fd got closed/the image is corrupted/...
- you were passed an exec command like "exec: gzip -d < foo.gz".
  If gzip failed once, it will fail forever.
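A small shell sketch of that deterministic-failure point (file name made up): the same command run on the same corrupt input fails identically every time, so a retry loop inside QEMU would gain nothing.

```shell
# Create a file that is not valid gzip data, then try to
# decompress it twice; both attempts fail the same way.
printf 'not really gzipped' > state.gz
gzip -d < state.gz > /dev/null 2>&1; first=$?
gzip -d < state.gz > /dev/null 2>&1; second=$?
echo "first=$first second=$second"   # both exit codes are nonzero and equal
```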

If you are running it by hand, cursor up + enter, and you are back
If you are using a management application, it is going to be easier to
restart the process that trying to cleanup everything.

Experience shows that people really try to do weird things when the
machine is in this state.

Later, Juan.


