[Qemu-devel] [RFC] postcopy livemigration proposal


From: Isaku Yamahata
Subject: [Qemu-devel] [RFC] postcopy livemigration proposal
Date: Mon, 8 Aug 2011 12:24:39 +0900
User-agent: Mutt/1.5.19 (2009-01-05)

This mail is about "Yabusame: Postcopy Live Migration for Qemu/KVM",
on which we'll give a talk at KVM Forum.
The purpose of this mail is to let developers know about it in advance,
so that we can get better feedback on its design/implementation approach
early, before we start implementing it.


Background
==========
* What is postcopy livemigration?
It is yet another live migration mechanism for Qemu/KVM, which
implements the migration technique known as "postcopy" or "lazy"
migration. Just after the "migrate" command is invoked, the execution
host of a VM is instantaneously switched to the destination host.

The benefit is that the total migration time is shorter, because each
page is transferred only once. Precopy, on the other hand, may send the
same pages again and again because they can be dirtied during migration.
The switching time from the source to the destination is several
hundred milliseconds, which enables quick load balancing.
For details, please refer to the papers.

We believe this is useful for others as well, so we'd like to merge this
feature into upstream qemu/kvm. The implementation that we have right
now is very ad hoc because it was built for academic research.
For the upstream merge, we're starting to re-design/re-implement it, and
we'd like to get feedback early.  Although many improvements/optimizations
are possible, we should first implement/merge a version that is simple
and clean, but extensible as well, and then improve/optimize it later.

postcopy livemigration will be introduced as an optional feature. The
existing precopy livemigration remains the default behavior.


* related links:
project page
http://sites.google.com/site/grivonhome/quick-kvm-migration

Enabling Instantaneous Relocation of Virtual Machines with a
Lightweight VMM Extension,
(proof-of-concept, ad-hoc prototype. not a new design)
http://grivon.googlecode.com/svn/pub/docs/ccgrid2010-hirofuchi-paper.pdf
http://grivon.googlecode.com/svn/pub/docs/ccgrid2010-hirofuchi-talk.pdf

Reactive consolidation of virtual machines enabled by postcopy live migration
(advantage for VM consolidation)
http://portal.acm.org/citation.cfm?id=1996125
http://www.emn.fr/x-info/ascola/lib/exe/fetch.php?media=internet:vtdc-postcopy.pdf

Qemu wiki
http://wiki.qemu.org/Features/PostCopyLiveMigration


Design/Implementation
=====================
The basic idea of postcopy livemigration is to use a sort of distributed
shared memory between the migration source and destination.

The migration procedure looks like this:
  - start migration
    stop the guest VM on the source and send the machine state, except
    guest RAM, to the destination
  - resume the guest VM on the destination without the guest RAM contents
  - hook guest access to pages and pull page contents from the source
    This continues until all the pages have been pulled to the destination

  The big picture is depicted at
  http://wiki.qemu.org/File:Postcopy-livemigration.png
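
To make the procedure a bit more concrete, below is a minimal sketch of
what the source side turns into after the switch: a passive page server
that answers requests until every page has been pulled. The request
format (a page frame number over the reused migration socket), the page
size and the pure demand-paging assumption are ours, for illustration
only; error handling and background (pushed) transfer are omitted.

/*
 * Source side after execution has switched to the destination: keep
 * the guest RAM around and serve page requests until every page has
 * been pulled.  Request format and socket handling are assumptions.
 */
#include <stdint.h>
#include <unistd.h>

#define PAGE_SIZE 4096

struct page_req {
    uint64_t pfn;       /* guest page frame number being requested */
};

static void serve_pages(int sock, const uint8_t *guest_ram,
                        uint64_t nr_pages)
{
    struct page_req req;
    uint64_t served = 0;

    /* In pure demand paging each page is requested exactly once, so
     * we are done after nr_pages requests. */
    while (served < nr_pages &&
           read(sock, &req, sizeof(req)) == sizeof(req)) {
        write(sock, guest_ram + req.pfn * PAGE_SIZE, PAGE_SIZE);
        served++;
    }
}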


There are several design points:
  - who takes care of pulling page contents
    an independent daemon vs a thread in qemu
    The daemon approach is preferable, because an independent daemon
    makes it easy to debug the postcopy memory mechanism without qemu.
    If required, it wouldn't be difficult to convert the daemon into
    a thread in qemu later.

  - connection between the source and the destination
    The connection for live migration can be re-used after sending machine
    state.

  - transfer protocol
    The existing migration protocol can be extended.

  - hooking guest RAM access
    Introduce a character device to handle page faults.
    When a page fault occurs, the device queues a page request to the
    user-space daemon at the destination. The daemon pulls the page
    contents from the source and feeds them back into the character
    device, which resolves the page fault.
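
A minimal sketch of that destination-side daemon loop is below. The
device name (/dev/umem), its read/write request interface, and the
convention of handing the daemon the already-established migration
connection as a file descriptor are all hypothetical, not an existing
API; error handling is omitted.

/*
 * Destination-side daemon for the character device approach: take
 * page-fault requests from the device, pull the page contents from
 * the migration source, and feed them back to resolve the fault.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

#define PAGE_SIZE 4096

struct page_req {
    uint64_t pfn;       /* faulting guest page frame number */
};

int main(int argc, char **argv)
{
    /* argv[1]: fd of the already-established migration connection,
     * handed over after the machine state has been sent. */
    int src, dev;
    struct page_req req;
    uint8_t page[PAGE_SIZE];

    if (argc < 2) {
        return 1;
    }
    src = atoi(argv[1]);
    dev = open("/dev/umem", O_RDWR);    /* hypothetical device */
    if (dev < 0) {
        return 1;
    }

    /* One fault at a time; a real daemon would batch requests and
     * prefetch the remaining pages in the background. */
    while (read(dev, &req, sizeof(req)) == sizeof(req)) {
        write(src, &req, sizeof(req));  /* ask the source for the page */
        read(src, page, PAGE_SIZE);     /* receive the page contents */
        write(dev, page, PAGE_SIZE);    /* resolve the page fault */
    }
    return 0;
}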


* More on hooking guest RAM access
There are several candidates for the implementation. Our preference is
the character device approach.

  - inserting hooks everywhere in qemu/kvm
    This is impractical.

  - backing store for guest RAM
    A block device or a file can be used to back guest RAM, so that
    guest RAM access can be hooked via the backing store.

    pros
    - no new device driver is needed
    cons
    - future improvements would be difficult
    - some KVM host features (KSM, THP) wouldn't work

  - character device
    qemu mmap()s a dedicated character device, and page faults on the
    mapping are hooked (see the sketch after this list).

    pros
    - straightforward approach
    - future improvements would be easy
    cons
    - a new driver is needed
    - some KVM host features (KSM, THP) wouldn't work
      They check whether a given VMA is anonymous. This can be fixed.

  - swap device
    When the guest is created, it is set up as if all of its RAM were
    swapped out to a dedicated swap device, which may be an nbd disk
    (or some kind of user-space block device, BUSE?).
    When the VM tries to access memory, swap-in is triggered and I/O to
    the swap device is issued. The I/O is then routed to the user-space
    daemon via the nbd protocol (or BUSE, AoE, iSCSI...). The daemon
    pulls the pages from the migration source and services the I/O
    requests.

    pros
    - after the page transfer is complete, everything is the same as in
      the normal case
    - no new device driver is needed
    cons
    - future improvements would be difficult
    - administration: setting up nbd and the swap device
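
To make the preferred character device option more concrete, here is a
minimal sketch of how qemu could allocate guest RAM from such a device
instead of anonymous memory. The device path and the assumption that
the driver supports a plain mmap() of the whole RAM size are ours for
illustration, not an existing interface.

/*
 * qemu side of the character device approach: map guest RAM from the
 * device so that the first touch of each page faults into the driver
 * and gets queued as a request for the user-space daemon.
 */
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>

static void *alloc_postcopy_guest_ram(size_t ram_size)
{
    int fd = open("/dev/umem", O_RDWR);     /* hypothetical device */

    if (fd < 0) {
        return MAP_FAILED;
    }
    /* Until the daemon has filled a page in, any access to it blocks
     * in the driver's fault handler. */
    return mmap(NULL, ram_size, PROT_READ | PROT_WRITE,
                MAP_SHARED, fd, 0);
}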

Thanks in advance
-- 
yamahata


