Hi,
I was reading qemu's (qemu-kvm-0.13.0's, to be specific) live
migration code to understand how the iterative dirty page transfer is
implemented. While doing so I noticed that ram_save_live in arch_init.c
is called quite often, more often than I expected (approx. 200 times
for an idle 500 MiB VM). It turns out this is because of the
while (!qemu_file_rate_limit(f)) loop: the rate limit trips very
frequently, so ram_save_live returns early, and as long as dirty pages
remain it is called again.
As I had set no bandwidth limit in the libvirt call, I dug deeper
and found a hard-coded maximum bandwidth in migration.c:
/* Migration speed throttling */
static uint32_t max_throttle = (32 << 20);
Using a packet sniffer I verified that max_throttle is in bytes/s,
here of course 32 MiB/s. Additionally, it translates directly to
network bandwidth - I was not sure about that beforehand, since the
bandwidth measured in ram_save_live seems to be buffer/memory
subsystem bandwidth?