
Re: [RFC PATCH 0/1] QEMU: Dirty quota-based throttling of vcpus


From: Shivam Kumar
Subject: Re: [RFC PATCH 0/1] QEMU: Dirty quota-based throttling of vcpus
Date: Tue, 6 Dec 2022 11:18:52 +0530
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:102.0) Gecko/20100101 Thunderbird/102.5.0



On 21/11/22 4:24 am, Shivam Kumar wrote:
This patchset is the QEMU-side implementation of a new dirty-quota-based
throttling algorithm that selectively throttles vCPUs based on their
individual contribution to overall memory dirtying and dynamically
adapts the throttle to the available network bandwidth.

Overview
----------

To throttle memory dirtying, we propose to set a limit on the number of
pages a vCPU can dirty in fixed, very short time intervals. This limit
depends on the network throughput measured over the last few intervals,
so that vCPUs are throttled according to the available network bandwidth.
We refer to this limit as the "dirty quota" of a vCPU and to the
fixed-size intervals as "dirty quota intervals".
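
As a rough illustration of the idea (not code from this patch), the
per-interval quota could be derived from the throughput measured over the
last few intervals; the helper names below are hypothetical:

/*
 * Hypothetical sketch: derive the per-interval, per-vCPU dirty quota from
 * the network throughput observed over the last few intervals.
 * measure_migration_bps() and nr_online_vcpus() are illustrative names,
 * not part of the actual patch.
 */
static uint64_t compute_dirty_quota(uint64_t dirty_quota_interval_ms)
{
    uint64_t bytes_per_sec = measure_migration_bps();   /* recent average */
    uint64_t bytes_per_interval =
        bytes_per_sec * dirty_quota_interval_ms / 1000;

    /* Pages the migration stream can absorb in one interval, split
     * equally among the vCPUs as a starting point. */
    return (bytes_per_interval / TARGET_PAGE_SIZE) / nr_online_vcpus();
}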

One possible approach to distributing the overall dirtying allowance for a
dirty quota interval is to split it equally among all vCPUs. This
distribution does not work well when the workload is skewed across vCPUs.
To handle such cases, we propose that if a vCPU does not use up its quota
in a given dirty quota interval, the unused portion is added to a common
pool. This common pool (or "common quota") can be consumed on a
first-come, first-served basis by all vCPUs in the upcoming dirty quota
intervals, as sketched below.
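
A minimal sketch of the common-pool bookkeeping, with illustrative names
that are not taken from the patch:

/*
 * Hypothetical sketch of the common quota pool: a vCPU that has used up
 * its own share first tries to claim pages from the shared pool before it
 * is put to sleep.  'common_quota' and the locking are illustrative only.
 */
static uint64_t common_quota;          /* pages left in the shared pool */
static QemuMutex common_quota_lock;

static uint64_t claim_from_common_pool(uint64_t requested_pages)
{
    uint64_t granted;

    qemu_mutex_lock(&common_quota_lock);
    granted = MIN(requested_pages, common_quota);
    common_quota -= granted;
    qemu_mutex_unlock(&common_quota_lock);

    /* 0 means the vCPU has to sleep until the next interval starts. */
    return granted;
}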


Design
----------

Userspace                                 KVM

[At the start of dirty logging]
Initialize dirty quota to some
non-zero value for each vcpu.    ----->   [When dirty logging starts]
                                           Start incrementing dirty count
                                           for every page dirtied by the vcpu.

                                           [Dirty count equals/exceeds
                                           dirty quota]
If the vcpu has already claimed  <-----   Exit to userspace.
its quota for the current dirty
quota interval:

         1) If common quota is
         available, give the vcpu
         its quota from common pool.

         2) Else sleep the vcpu until
         the next interval starts.

Give the vcpu its share for the
current (fresh) dirty quota      ----->  Continue dirtying with the newly
interval.                                received quota.

[At the end of dirty logging]
Set dirty quota back to zero
for every vcpu.                 ----->   Throttling disabled.
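
In rough terms, the userspace half of this handshake could look like the
sketch below. The dirty_quota field in kvm_run follows the kernel patchset
linked under References, but the exact field name, the exit handling and
the helpers marked "hypothetical" should all be read as assumptions, not
as the patch itself:

static uint64_t per_vcpu_quota;   /* share computed for the current interval */

/* Called when KVM exits because a vCPU's dirty count reached its quota. */
static void handle_dirty_quota_exit(CPUState *cpu, struct kvm_run *run)
{
    uint64_t extra;

    if (vcpu_already_claimed_quota(cpu)) {                /* hypothetical */
        /* Own share for this interval already used: try the common pool
         * first (see the claim_from_common_pool() sketch above). */
        extra = claim_from_common_pool(per_vcpu_quota);
        if (!extra) {
            /* Nothing left: block until the next interval starts and
             * then hand out a fresh per-vCPU share. */
            sleep_until_next_interval(cpu);               /* hypothetical */
            extra = per_vcpu_quota;
        }
    } else {
        extra = per_vcpu_quota;
    }

    /* Raise the allowance KVM sees and let the vCPU continue dirtying. */
    run->dirty_quota += extra;
}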


References
----------

KVM Forum Talk: https://www.youtube.com/watch?v=ZBkkJf78zFA
Kernel Patchset:
https://lore.kernel.org/all/20221113170507.208810-1-shivam.kumar1@nutanix.com/


Note
----------

We understand that there is good scope for improvement in the current
implementation. Here is a list of things we are working on:
1) Adding dirty quota as a migration capability so that it can be toggled
through a QMP command.
2) Adding support for throttling guest DMAs.
3) Not enabling dirty quota for the first migration iteration.
4) Falling back to the current auto-converge-based throttling in cases where
dirty quota throttling could overthrottle.

Please stay tuned for the next patchset.

Shivam Kumar (1):
   Dirty quota-based throttling of vcpus

  accel/kvm/kvm-all.c       | 91 +++++++++++++++++++++++++++++++++++++++
  include/exec/memory.h     |  3 ++
  include/hw/core/cpu.h     |  5 +++
  include/sysemu/kvm_int.h  |  1 +
  linux-headers/linux/kvm.h |  9 ++++
  migration/migration.c     | 22 ++++++++++
  migration/migration.h     | 31 +++++++++++++
  softmmu/memory.c          | 64 +++++++++++++++++++++++++++
  8 files changed, 226 insertions(+)


It'd be great if I could get some more feedback before I send v2. Thanks.

CC: Peter Xu, Juan Quintela


