From: Vasilis Liaskovitis
Subject: [Qemu-devel] slower live-migration with XBZRLE
Date: Thu, 11 Oct 2012 18:26:41 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

Hi,

I am testing XBZRLE compression with qemu-1.2 for live migration of large VMs
and/or memory-intensive workloads. I have a 4GB guest that runs the memory r/w
load generator from the original patchset; see docs/xbzrle.txt or
http://lists.gnu.org/archive/html/qemu-devel/2012-07/msg01207.html
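
For anyone who doesn't want to dig up the patchset: the generator is
essentially a loop that keeps making small writes across a large buffer. A
minimal sketch of that idea, assuming 4KB pages (my own rough version, not
the exact program from docs/xbzrle.txt):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* ~1 GiB buffer; touch one byte in every 4 KiB page per pass so the
       whole working set stays dirty for migration's dirty-page tracking */
    size_t pages = 256 * 1024;
    char *buf = calloc(pages, 4096);
    if (!buf)
        return 1;
    for (;;) {
        for (size_t i = 0; i < pages; i++)
            buf[i * 4096]++;   /* small in-page update: the case xbzrle targets */
        printf(".");
        fflush(stdout);
    }
}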

I have set xbzrle to "on" on both source and target, with the default cache
size on the source (I also tried a 1g cache size, both mid-migration and when
starting a fresh migration). The migration starts, but the RAM transfer rate
is very slow and the total migration time is very large. Cache misses and
overflows seem small, as far as I can tell.
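
For reference, the setup on the source monitor was along these lines (HMP,
with the xbzrle capability also enabled on the target; the tcp port is just
an example):

(qemu) migrate_set_capability xbzrle on
(qemu) migrate_set_cache_size 1g
(qemu) migrate -d tcp:localhost:4444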

Here's example output from the source "info migrate" with xbzrle=on when it's done:

(qemu) info migrate
capabilities: xbzrle: on 
Migration status: completed
total time: 6530177 milliseconds
transferred ram: 4887726 kbytes
remaining ram: 0 kbytes
total ram: 4211008 kbytes
duplicate: 3126234 pages
normal: 43587 pages
normal bytes: 174348 kbytes
cache size: 268435456 bytes
xbzrle transferred: 4710325 kbytes
xbzrle pages: 266649315 pages
xbzrle cache miss: 43440
xbzrle overflow : 147
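
A rough sanity check on those counters, assuming 4KB pages and that I'm
reading them correctly: total ram of 4211008 kbytes is about 1.05 million
pages, yet 266649315 xbzrle pages were sent, so each page was re-sent roughly
250 times on average. The effective rate works out to about
4887726 kbytes / 6530 s, i.e. roughly 750 kbytes/s.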

The same guest+workload migrates much faster with xbzrle=off. I would have
expected the opposite behaviour, i.e. that with xbzrle=off this guest+workload
combination would migrate very slowly or never finish.

Here's example output from the source "info migrate" with xbzrle=off when it's done:

(qemu) info migrate
capabilities: xbzrle: off 
Migration status: completed
total time: 10791 milliseconds
transferred ram: 220735 kbytes
remaining ram: 0 kbytes
total ram: 4211008 kbytes
duplicate: 1007476 pages
normal: 54938 pages
normal bytes: 219752 kbytes
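
For comparison, that is roughly 220735 kbytes / 10.8 s ≈ 20000 kbytes/s,
against the ≈ 750 kbytes/s of the xbzrle run above: the xbzrle migration was
about 600x slower end to end.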

Have I missed setting some other migration parameter? I tried using
migrate_set_speed to raise the bandwidth limit to 1000000000 bytes/sec, but it
didn't make any difference.
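
For reference, that attempt was simply:

(qemu) migrate_set_speed 1000000000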

Are there any default parameters that would make xbzrle inefficient for this
type of workload? Has anyone measured a point of diminishing returns where,
e.g., the encoding/decoding CPU overhead makes the feature ineffective?

This was a live migration performed on the same host, but I have seen the
same behaviour between two hosts. The test host was idle apart from the VMs.

sample command line:
-enable-kvm -M pc -smp 2,maxcpus=64 -cpu host -m 4096 \
-drive file=/home/debian.img,if=none,id=drive-virtio-disk0,format=raw \
-device virtio-blk-pci,bus=pci.0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
-vga std -netdev type=tap,id=guest0,vhost=on \
-device virtio-net-pci,netdev=guest0

thanks,

- Vasilis


