Re: [Qemu-devel] [PATCH] augment info migrate with page status


From: Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH] augment info migrate with page status
Date: Thu, 21 May 2009 08:14:23 -0500
User-agent: Thunderbird 2.0.0.21 (X11/20090320)

Dor Laor wrote:
 static ram_addr_t ram_save_threshold = 10;
+static ram_addr_t pages_transferred = 0;

It would be nice to zero pages_transferred at the start of each migration operation.
ram_save_threshold is really too small. From Uri's past measurements, a value of 50 is a
better fit. Alternatively, it could be parameterized via the monitor command.
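
Something like this would do it (just a sketch; whether stage 1 of ram_save_live is the right hook for the reset is my assumption):

/* Reset the counter whenever a new migration round starts. */
static ram_addr_t pages_transferred = 0;

static int ram_save_live(QEMUFile *f, int stage, void *opaque)
{
    if (stage == 1) {
        pages_transferred = 0;   /* fresh count for this migration */
    }
    /* ... existing save loop, bumping pages_transferred per page sent ... */
    return 0;
}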

In general there is a small drawback in the current approach:
The way bandwidth is capped, IIRC, the migration bandwidth allocation starts being consumed at the beginning of every second. If the allocation is used up after 100 msec, you'll sit idle for the remaining 900 msec. During that window, a management app reading ram_save_remaining will see that migration is
not progressing and might either increase the bandwidth or stop the guest.
That's why counting no-progress iterations has an advantage.
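
To illustrate the effect (pseudo-C, not the actual buffered-file rate-limit code; send_ram_pages() and sleep_until_next_second() are made-up helpers):

extern size_t send_ram_pages(size_t max_bytes);   /* returns bytes actually sent */
extern size_t ram_save_remaining(void);
extern void   sleep_until_next_second(void);

static void migrate_rate_limited(size_t bytes_per_second)
{
    while (ram_save_remaining() > 0) {
        size_t quota = bytes_per_second;            /* refilled once per second */
        while (quota > 0 && ram_save_remaining() > 0) {
            quota -= send_ram_pages(quota);         /* may drain the quota in ~100 msec */
        }
        /* The sender now idles for the rest of the second; a mgmt app
         * polling ram_save_remaining() here sees no progress at all. */
        sleep_until_next_second();
    }
}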

If I were implementing this in libvirt, here's what I would do:

B = MB/sec bandwidth limit
S = guest size in MB
C = some constant factor, perhaps 4-5

T = S / B * C
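(For example, a 4096 MB guest with B = 32 MB/sec and C = 4 gives T = 4096 / 32 * 4 = 512 seconds.)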

1) Wait for T seconds or until migration completes.
2) If a timeout occurred:
  a) M = actual transfer rate for the migration in MB/sec
  b) If M < B, T1 = S / M * C
  c) T = T1 - T
  d) If T <= 0, migration failed
  e) else goto 1
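
In rough C (a sketch of the policy above, not libvirt code; wait_for_migration() and observed_rate_mb_s() are placeholders for whatever the mgmt app uses to poll the migration):

#include <stdbool.h>

extern bool   wait_for_migration(double timeout_sec);  /* true if migration completed */
extern double observed_rate_mb_s(void);                /* M: actual transfer rate so far */

static bool supervise_migration(double S /* guest MB */, double B /* MB/sec cap */, double C)
{
    double T = S / B * C;                    /* initial timeout */

    for (;;) {
        if (wait_for_migration(T)) {
            return true;                     /* completed within the budget */
        }
        double M = observed_rate_mb_s();
        if (M >= B) {
            return false;                    /* ran at the cap and still timed out
                                              * (the steps above leave this case implicit) */
        }
        double T1 = S / M * C;               /* budget based on the observed rate */
        T = T1 - T;                          /* time we are still willing to wait */
        if (T <= 0) {
            return false;                    /* migration failed */
        }
        /* else goto 1: wait again with the adjusted timeout */
    }
}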

Basically, this institutes a policy that a migration must complete after transferring at most C * guest_size worth of data. It adjusts for the observed bandwidth rate versus the configured cap. It makes sense from an administrative perspective because you are probably only willing to waste so much network bandwidth on attempting a migration. Obviously, C and B are tunables that depend heavily on the relative priority of the migration.

Whether you force a non-live migration after failure of a live migration is an administrative decision.

--
Regards,

Anthony Liguori




