From: Michael R. Hines
Subject: Re: [Qemu-devel] [RFC PATCH v2 01/12] mc: add documentation for micro-checkpointing
Date: Thu, 20 Feb 2014 09:17:24 +0800
User-agent: Mozilla/5.0 (X11; Linux i686; rv:24.0) Gecko/20100101 Thunderbird/24.3.0

On 02/19/2014 07:27 PM, Dr. David Alan Gilbert wrote:

I was just wondering if a separate 'max buffer size' knob would allow
you to more reasonably bound memory without setting policy; I don't think
people like having potentially x2 memory.

Note: Checkpoint memory is not monotonic in this patchset (which
is unique to this implementation). Only if the guest actually dirties
100% of its memory between one checkpoint and the next will
the host experience 2x memory usage, and only for a short period of time.

The patch has a 'slab' mechanism built into it which implements
a water-mark style policy that throws away unused portions of
the 2x checkpoint memory if later checkpoints are much smaller
(which is likely to be the case if the writable working set size changes).

However, to answer your question: such a knob could be added, but
the same effect can be achieved simply by tuning the checkpoint frequency
itself. Memory usage is thus a function of the checkpoint frequency.
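
To put rough (purely illustrative, not measured) numbers on that relationship:
a guest dirtying 4 KiB pages at around 250,000 pages/sec would accumulate
roughly 100 MB of checkpoint memory per 100 ms epoch, but only about 50 MB
per 50 ms epoch, since each checkpoint only has to hold what was dirtied since
the previous one - so raising the frequency directly lowers the peak buffering
requirement.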

If the guest application was maniacal, banging away at all the memory,
there's very little that can be done in the first place, but if the guest application
was mildly busy, you don't want to throw away your ability to be fault
tolerant - you would just need more frequent checkpoints to keep up with
the dirty rate.

Once the application died down - the water-mark policy would kick in
and start freeing checkpoint memory. (Note: this policy happens on
both sides in the patchset because the patch has to be fully compatible
with RDMA memory pinning).

What is *not* exposed, however, are the watermark knobs themselves.
I definitely think those need to be exposed - that would also give you a
control similar to 'max buffer size' - you could place a time limit on the
slab list in the patch, or something like that.
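
To make that concrete, here is a minimal sketch of what a time-limited,
water-mark style slab reclaim could look like (the names MCSlab, slab_list
and mc_slab_reclaim are illustrative, not the identifiers actually used in
the patchset):

#include <glib.h>
#include <stdint.h>

typedef struct MCSlab {
    uint8_t *buf;        /* pinned checkpoint memory (RDMA-registered)      */
    size_t   size;
    int64_t  last_used;  /* checkpoint epoch when this slab was last filled */
} MCSlab;

static GList  *slab_list;         /* all currently allocated slabs          */
static int64_t checkpoint_epoch;  /* incremented once per checkpoint        */

/* Free any slab that has sat idle for more than 'max_idle' checkpoints.
 * Run on both sides after each committed transaction so that RDMA
 * pinning stays symmetric on source and destination. */
static void mc_slab_reclaim(int64_t max_idle)
{
    GList *l = slab_list;

    while (l) {
        GList *next = l->next;
        MCSlab *slab = l->data;

        if (checkpoint_epoch - slab->last_used > max_idle) {
            /* a real implementation would unregister the RDMA region here */
            g_free(slab->buf);
            g_free(slab);
            slab_list = g_list_delete_link(slab_list, l);
        }
        l = next;
    }
}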



Good question in general - I'll add it to the FAQ. The patch implements
a basic 'transaction' mechanism in coordination with an outbound I/O
buffer (documented further down). With these two things in
place, split-brain is not possible because the destination is not running.
We don't allow the destination to resume execution until a committed
transaction has been acknowledged by the destination, and only then do
we allow any outbound network traffic to be released to the
outside world.
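
In pseudo-C, the source-side ordering of one epoch looks roughly like this
(a sketch only - MCState, mc_capture, mc_wait_ack, netbuf_release and the
other helpers are made-up names, not the patchset's API):

typedef struct MCCheckpoint MCCheckpoint;

typedef struct MCState {
    void *committed_buffer;  /* outbound packets held for the epoch
                                currently being committed              */
} MCState;

/* hypothetical helpers, assumed implemented elsewhere */
void mc_pause_vm(MCState *s);
void mc_resume_vm(MCState *s);
MCCheckpoint *mc_capture(MCState *s);
void mc_send(MCState *s, MCCheckpoint *cp);
int  mc_wait_ack(MCState *s);
void mc_disable_and_flush(MCState *s);
void netbuf_release(void *buffered_packets);

/* One micro-checkpointing epoch on the source.  The destination never
 * executes; it only takes over if the source is declared dead. */
static void mc_epoch(MCState *s)
{
    mc_pause_vm(s);
    MCCheckpoint *cp = mc_capture(s);   /* copy dirty pages to a slab   */

    mc_send(s, cp);
    if (mc_wait_ack(s) < 0) {
        /* MC connection failed: assume the destination is dead,
         * release the buffered output and keep running unprotected. */
        mc_disable_and_flush(s);
        mc_resume_vm(s);
        return;
    }

    /* Transaction committed: output generated before this checkpoint
     * may now reach the outside world ...                             */
    netbuf_release(s->committed_buffer);

    /* ... and the VM resumes; its new output is buffered until the
     * next transaction completes.                                     */
    mc_resume_vm(s);
}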
Yeh I see the IO buffer, what I've not figured out is how:
   1) MC over TCP/IP gets an acknowledge on the source to know when
      it can unplug its buffer.

Only partially correct (See the steps on the wiki). There are two I/O
buffers at any given time which protect against a split-brain scenario:
One buffer for the current checkpoint that is being generated (running VM)
and one buffer for the checkpoint that is being committed in a transaction.

   2) Let's say the MC connection fails, so that ack never arrives;
      the source must assume the destination has failed and release its
      packets and carry on.

Only the packets for Buffer A - the current committed checkpoint - are
released after a completed transaction. The packets for Buffer B
(the currently running VM) are still held until the next transaction starts.
Later, once that transaction completes and A is released, B becomes the
new A and a fresh buffer is installed to become the new Buffer B for
the currently running VM.
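
Roughly, in code (again a sketch with made-up names, not the patchset's):

typedef struct NetBuffer NetBuffer;

/* hypothetical helpers, assumed implemented elsewhere */
NetBuffer *netbuf_new(void);
void netbuf_flush_and_free(NetBuffer *b);

struct MCBuffers {
    NetBuffer *buf_a;   /* output of the epoch being committed in a transaction */
    NetBuffer *buf_b;   /* output of the currently running VM                   */
};

/* Called once the destination has acknowledged the committed checkpoint. */
static void mc_rotate_buffers(struct MCBuffers *m)
{
    /* packets generated during the committed epoch may now go out */
    netbuf_flush_and_free(m->buf_a);

    /* B becomes the new A: its packets stay held until the *next*
     * transaction completes ...                                    */
    m->buf_a = m->buf_b;

    /* ... and a fresh buffer is installed for the VM that keeps running */
    m->buf_b = netbuf_new();
}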


      The destination must assume the source has failed and take over.

The destination must also receive an ACK. The ack goes both ways.

Only once the source and destination both acknowledge a completed
transaction does the source VM resume execution - and even then
its packets are still being buffered until the next transaction starts.
(That's why it's important to checkpoint as frequently as possible.)


   3) If we're relying on TCP/IP timeout that's quite long.


Actually, my experience has been that TCP seems to have more than
one kind of timeout - if the receiver is not responding *at all*, it seems that
TCP has a dedicated timer for that. The socket API immediately
sends back an error code, and the patchset closes the connection
on the destination and recovers.
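
For what it's worth, if one wanted to bound that detection time explicitly
rather than rely on the default timers, reasonably recent Linux kernels expose
TCP_USER_TIMEOUT plus the keepalive knobs per socket. A sketch of tightening
them on the MC socket (not something the patchset currently does):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Cap how long unacknowledged data may sit on the MC socket before the
 * kernel declares the peer dead, and probe an idle connection quickly. */
static int mc_bound_failure_detection(int fd)
{
    unsigned int user_timeout_ms = 2000;               /* 2 seconds */
    int keepalive = 1, idle = 1, intvl = 1, cnt = 3;

    if (setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
                   &user_timeout_ms, sizeof(user_timeout_ms)) < 0) {
        return -1;
    }
    setsockopt(fd, SOL_SOCKET,  SO_KEEPALIVE,  &keepalive, sizeof(keepalive));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,      sizeof(idle));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl,     sizeof(intvl));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,       sizeof(cnt));
    return 0;
}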

No, I wasn't thinking of vmsplice; I just have vague memories of suggestions
of the use of Intel's I/OAT, graphics cards, etc. for doing things like page
zeroing and DMAing data around; I can see there is a dmaengine API in the
kernel, but I haven't found where, if anywhere, that is available to userspace.

2) Using COW: Actually, I think that's an excellent idea. I've bounced that
      around with my colleagues, but we simply didn't have the manpower
      to implement it and benchmark it. There was also some concern about
      performance: Would the writable working set of the guest be so
      active/busy that COW would not get you much benefit? I think it's
      worth a try. Patches welcome =)
It's possible that might be doable with some of the same tricks I'm
looking at for post-copy, I'll see what I can do.

That's great news - I'm very interested to see how this applies
to post-copy, and in any patches that come of it.
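
For anyone who wants to experiment with the COW idea before a proper
implementation exists, the cheapest trick is probably fork(): the child gets
a copy-on-write view of guest RAM and can stream a consistent snapshot while
the parent keeps running and only pays for the pages it actually re-dirties.
A toy sketch (mc_stream_checkpoint is a made-up helper; this is not what the
patchset does):

#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical helper: ships 'len' bytes of RAM to the destination,
 * returning 0 on success. */
int mc_stream_checkpoint(const void *ram, size_t len);

static int cow_checkpoint(void *guest_ram, size_t len)
{
    pid_t pid = fork();

    if (pid < 0) {
        return -1;
    }
    if (pid == 0) {
        /* Child: sees a frozen copy-on-write snapshot of guest_ram. */
        _exit(mc_stream_checkpoint(guest_ram, len) == 0 ? EXIT_SUCCESS
                                                        : EXIT_FAILURE);
    }

    /* Parent: the guest resumes immediately; the kernel copies a page
     * only when the guest writes to it while the child is still
     * streaming.  (A real implementation would reap asynchronously.)  */
    int status;
    waitpid(pid, &status, 0);
    return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
}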

- Michael



