

From: Michael R. Hines
Subject: Re: [Qemu-devel] [RFC PATCH v2 01/12] mc: add documentation for micro-checkpointing
Date: Mon, 03 Mar 2014 14:08:47 +0800
User-agent: Mozilla/5.0 (X11; Linux i686; rv:24.0) Gecko/20100101 Thunderbird/24.3.0

On 02/21/2014 05:44 PM, Dr. David Alan Gilbert wrote:
It's not clear to me how much (if any) of this control loop should
be in QEMU or in the management software, but I would definitely agree
that at a minimum the ability to detect the situation and remedy
the situation should be in QEMU. I'm not entirely convinced that the
ability to *decide* to remedy the situation should be in QEMU, though.
The management software access is low frequency, high latency; it should
be setting general parameters (max memory allowed, desired checkpoint
frequency etc) but I don't see that we can use it to do anything on
a sooner-than-a-few-seconds basis; so yes it can monitor things and
tweak the knobs if it sees the host as a whole is getting tight on RAM
etc - but we can't rely on it to throw on the brakes if this guest
suddenly decides to take bucketloads of RAM; something has to react
quickly in relation to previously set limits.

I agree - the boolean flag I mentioned previously would do just
that: setting the flag (or state, perhaps instead of boolean),
would indicate to QEMU to make a particular type of sacrifice:

A flag of "0" might mean "Throttle the guest in an emergency"
A flag of "1" might mean "Throttling is not acceptable, just let the guest use the extra memory"
A flag of "2" might mean "Neither one is acceptable, fail now and inform the management software to restart somewhere else".

Or something to that effect........
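To make the shape of that knob concrete, here is a minimal sketch in C. Every name in it (the enum values, the function, the byte counts) is hypothetical and not an existing QEMU interface; it only illustrates the idea that management sets a policy and a limit ahead of time, and QEMU reacts quickly and locally when the limit is crossed:

```c
/* Hypothetical sketch only - not QEMU code. Management software sets
 * the policy and the limit in advance; QEMU reacts immediately. */
#include <stddef.h>

typedef enum {
    MC_MEM_POLICY_THROTTLE = 0, /* "0": throttle the guest in an emergency */
    MC_MEM_POLICY_ALLOW    = 1, /* "1": let the guest use the extra memory */
    MC_MEM_POLICY_FAIL     = 2, /* "2": fail now; management restarts elsewhere */
} MCMemPolicy;

typedef enum {
    MC_ACT_NONE,     /* still within the pre-set limit */
    MC_ACT_THROTTLE, /* slow the guest down */
    MC_ACT_GROW,     /* tolerate the extra memory use */
    MC_ACT_FAIL,     /* give up and notify management */
} MCAction;

/* Decide, inside QEMU, what to do when checkpoint memory use is checked
 * against a limit that management configured earlier. */
static MCAction mc_memory_pressure(MCMemPolicy policy,
                                   size_t used_bytes, size_t max_bytes)
{
    if (used_bytes <= max_bytes) {
        return MC_ACT_NONE;
    }
    switch (policy) {
    case MC_MEM_POLICY_THROTTLE:
        return MC_ACT_THROTTLE;
    case MC_MEM_POLICY_ALLOW:
        return MC_ACT_GROW;
    default:
        return MC_ACT_FAIL;
    }
}
```

The point is only that the *decision* was made earlier and at low frequency by management; the fast path in QEMU just applies it.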

If you block the guest from being checkpointed,
then what happens if there is a failure during that extended period?
We will have saved memory at the expense of availability.
If the active machine fails during this time then the secondary carries
on from its last good snapshot in the knowledge that the active
never finished the new snapshot and so never uncorked its previous packets.

If the secondary machine fails during this time then the active drops
its nascent snapshot and carries on.
Yes, that makes sense. Where would that policy go, though,
continuing the above concern?
I think there has to be some input from the management layer for failover,
because (as per my split-brain concerns) something has to make the decision
about which of the source/destination is to take over, and I don't
believe individual instances have that information.

Agreed - so the "ability" (as hinted on above) should be in QEMU,
but the decision to recover from the situation probably should not
be, where "recover" is defined as the VM is back in a fully running,
fully fault-tolerant protected state (potentially where the source VM
is on a different machine than it was before).


Well, that's simple: If there is a failure of the source, the destination
will simply revert to the previous checkpoint using the same mode
of operation. The lost ACKs that you're curious about only
apply to the checkpoint that is in progress. Just because a
checkpoint is in progress does not mean that the previous checkpoint
is thrown away - it is already loaded into the destination's memory
and ready to be activated.
I still don't see why, if the link between them fails, the destination
doesn't fall back to its previous checkpoint, AND the source carries
on running - I don't see how they can differentiate which of them has failed.
I think you're forgetting that the source I/O is buffered - it doesn't
matter that the source VM is still running. As long as its output is
buffered - it cannot have any non-fault-tolerant effect on the outside
world.
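The output-commit rule being described can be sketched in a few lines of C. This is illustrative only (the names and structure are made up, not QEMU's actual MC buffering code): packets generated during an epoch stay corked until the destination ACKs the checkpoint for that epoch, so a failure mid-checkpoint can never have leaked output that the last committed snapshot doesn't account for:

```c
/* Illustrative sketch of micro-checkpointing output commit - not QEMU code. */
#include <stddef.h>

enum { MC_MAX_PKTS = 64 };

typedef struct {
    int pkts[MC_MAX_PKTS]; /* stand-in for buffered network packets */
    size_t n_buffered;     /* generated this epoch, still corked */
    size_t n_released;     /* safely uncorked to the outside world */
} MCOutputBuffer;

/* Output generated while a checkpoint is in progress is corked. */
static void mc_buffer_packet(MCOutputBuffer *b, int pkt)
{
    b->pkts[b->n_buffered++] = pkt;
}

/* Called only once the destination ACKs the completed checkpoint:
 * the whole epoch's output is uncorked at once. */
static void mc_checkpoint_acked(MCOutputBuffer *b)
{
    b->n_released += b->n_buffered;
    b->n_buffered = 0;
}

/* Called on a failure mid-checkpoint: drop the epoch's output.
 * Nothing in it was ever visible outside, so the destination can
 * safely resume from its last committed snapshot. */
static void mc_checkpoint_failed(MCOutputBuffer *b)
{
    b->n_buffered = 0;
}
```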

In the future, if a technician accesses the machine or the network
is restored, the management software can terminate the stale
source virtual machine.
I think going with my comment above; I'm working on the basis it's just
as likely for the destination to fail as it is for the source to fail,
and a destination failure shouldn't kill the source; and in the case
of a destination failure the source is going to have to let its buffered
I/Os start going again.

Yes, that's correct, but only after management software knows about
the failure. If we're on a tightly-coupled fast lan, there's no reason
to believe that libvirt, for example, would be so slow that we cannot
wait a few extra (10s of?) milliseconds after destination failure to
choose a new destination and restart the previous checkpoint.

But if management *is* too slow, which is not unlikely, then I think
we should just tell the source to Migrate entirely and get out of that
environment.

Either way - this isn't something QEMU itself necessarily needs to
worry about - it just needs to know not to explode if the destination
fails and wait for instructions on what to do next.......
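That "don't explode, wait for instructions" behavior amounts to a small, bounded decision loop. The sketch below is an assumption about how it might look, not proposed QEMU code; the names and the notion of a wait budget are invented for illustration:

```c
/* Hypothetical sketch - not QEMU code. After the destination fails,
 * the source keeps running (output corked) and polls for instructions. */
#include <stdbool.h>

typedef enum {
    MC_NEXT_WAIT,         /* keep running, output still corked */
    MC_NEXT_RESUME_MC,    /* management chose a new destination in time */
    MC_NEXT_MIGRATE_AWAY, /* management too slow: migrate out entirely */
} MCNextStep;

/* Called periodically after a destination failure. budget_ms bounds
 * how long we are willing to wait for the management layer. */
static MCNextStep mc_after_dest_failure(bool mgmt_replied,
                                        long waited_ms, long budget_ms)
{
    if (mgmt_replied) {
        return MC_NEXT_RESUME_MC;   /* restart the previous checkpoint */
    }
    if (waited_ms < budget_ms) {
        return MC_NEXT_WAIT;        /* a few extra ms of waiting is fine */
    }
    return MC_NEXT_MIGRATE_AWAY;    /* get out of that environment */
}
```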

Alternatively, if the administrator "prefers" restarting the fault-tolerance
instead of Migration, we could have a QMP command that specifies
a "backup" destination (or even a "duplicate" destination) that QEMU
would automatically know about in the case of destination failure.
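Purely for illustration, such a QMP command might look something like the fragment below. The command name and arguments are entirely made up - no such command exists:

```json
{ "execute": "migrate-set-mc-backup",
  "arguments": { "uri": "tcp:backup-host:4444" } }
```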

But, I wouldn't implement something like that until at least a first version
was accepted by the community.

- Michael
