From: Jason J. Herne
Subject: Re: [Qemu-devel] [PATCH 2/2] migration: Dynamic cpu throttling for auto-converge
Date: Mon, 01 Jun 2015 13:16:31 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.5.0

On 06/01/2015 11:32 AM, Dr. David Alan Gilbert wrote:
* Jason J. Herne (address@hidden) wrote:
Remove traditional auto-converge static 30ms throttling code and replace it
with a dynamic throttling algorithm.

Additionally, be more aggressive when deciding when to start throttling.
Previously we waited until four unproductive memory passes. Now we begin
throttling after only two unproductive memory passes. Four seemed quite
arbitrary and only waiting for two passes allows us to complete the migration
faster.

Signed-off-by: Jason J. Herne <address@hidden>
Reviewed-by: Matthew Rosato <address@hidden>
---
  arch_init.c           | 95 +++++++++++++++++----------------------------------
  migration/migration.c |  9 +++++
  2 files changed, 41 insertions(+), 63 deletions(-)

diff --git a/arch_init.c b/arch_init.c
index 23d3feb..73ae494 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -111,9 +111,7 @@ int graphic_depth = 32;
  #endif

  const uint32_t arch_type = QEMU_ARCH;
-static bool mig_throttle_on;
  static int dirty_rate_high_cnt;
-static void check_guest_throttling(void);

  static uint64_t bitmap_sync_count;

@@ -487,6 +485,31 @@ static size_t save_page_header(QEMUFile *f, RAMBlock *block, ram_addr_t offset)
      return size;
  }

+/* Reduce amount of guest cpu execution to hopefully slow down memory writes.
+ * If guest dirty memory rate is reduced below the rate at which we can
+ * transfer pages to the destination then we should be able to complete
+ * migration. Some workloads dirty memory way too fast and will not effectively
+ * converge, even with auto-converge. For these workloads we will continue to
+ * increase throttling until the guest is paused long enough to complete the
+ * migration. This essentially becomes a non-live migration.
+ */
+static void mig_throttle_guest_down(void)
+{
+    CPUState *cpu;
+
+    CPU_FOREACH(cpu) {
+        /* We have not started throttling yet. Let's start it. */
+        if (!cpu_throttle_active(cpu)) {
+            cpu_throttle_start(cpu, 0.2);
+        }
+
+        /* Throttling is already in place. Just increase the throttling rate */
+        else {
+            cpu_throttle_start(cpu, cpu_throttle_get_ratio(cpu) * 2);
+        }

Now that migration has migrate_parameters, it would be best to replace
the magic numbers (the 0.2, the *2 - anything else?)  by parameters that can
change the starting throttling and increase rate.  It would probably also be
good to make the current throttling rate visible in info somewhere; maybe
info migrate?


I did consider all of this. However, I don't think the controls this patch
provides form an ideal throttling mechanism. I suspect someone with
vcpu/scheduling experience could whip up something cleaner and more
user-friendly. I merely propose this because it seems better than what we have
today for auto-converge.

I'm also not sure how useful the information really is to the user. The fact
that it is a ratio rather than a percentage might be confusing, and I suspect
it is not truly very accurate anyway. Again, I was going for "make it better",
not "make it perfect".

Lastly, if we define this external interface, we are kind of stuck with it,
yes? In that case we should be sure that this is how we want cpu throttling to
work. Alternatively, I propose accepting this patch set as-is and then working
on a real vcpu throttling mechanism that can serve auto-converge as well as a
user-controllable guest throttling/limiting mechanism. Once that is in place,
we can migrate (no pun intended) the auto-converge code to the new way and
remove this stuff.

With all of that said, I'm willing to provide the requested controls if we really
feel the pros outweigh the cons. Thanks for your review :).

...

--
-- Jason J. Herne (address@hidden)



