Re: [Qemu-devel] [PATCH 0/7] x86: Rework KVM-defaults compat code, enable kvm_pv_unhalt by default


From: Eduardo Habkost
Subject: Re: [Qemu-devel] [PATCH 0/7] x86: Rework KVM-defaults compat code, enable kvm_pv_unhalt by default
Date: Fri, 13 Oct 2017 16:01:38 -0300
User-agent: Mutt/1.9.0 (2017-09-02)

On Wed, Oct 11, 2017 at 04:19:38PM -0400, Waiman Long wrote:
> On 10/10/2017 03:41 PM, Eduardo Habkost wrote:
> > On Tue, Oct 10, 2017 at 02:07:25PM -0400, Waiman Long wrote:
> >> On 10/10/2017 11:50 AM, Eduardo Habkost wrote:
> >>>> Yes.  Another possibility is to enable it when there is >1 NUMA node in
> >>>> the guest.  We generally don't do this kind of magic, but higher layers
> >>>> (oVirt/OpenStack) do.
> >>> Can't the guest make this decision, instead of the host?
> >> By guest, do you mean the guest OS itself or the admin of the guest VM?
> > It could be either.  But even if action is required from the
> > guest admin to get better performance in some cases, I'd argue
> > that the default behavior of a Linux guest shouldn't cause a
> > performance regression if the host stops hiding a feature in
> > CPUID.
> >
> >> I am thinking about maybe adding a kernel boot command-line option
> >> like "unfair_pvspinlock_cpu_threshold=4", which would instruct the OS
> >> to use unfair spinlocks if the number of CPUs is 4 or less, for
> >> example.  The default value of 0 would preserve today's behavior.
> >> Please let me know what you guys think about that.
> > If that's implemented, can't Linux choose a reasonable default
> > for unfair_pvspinlock_cpu_threshold that won't require the admin
> > to manually configure it in most cases?
> 
> It is hard to pick a fixed value, as it depends on the CPUs being used
> as well as the kind of workloads being run.  Besides, using unfair
> locks has the undesirable side effect of being subject to lock
> starvation under certain circumstances, so we may not want it to be
> turned on by default.  Customers would have to accept that risk if they
> want it.

Probably I am not seeing all the variables involved, so pardon my
confusion.  Would unfair_pvspinlock_cpu_threshold > num_cpus just
disable usage of kvm_pv_unhalt, or make the guest choose a
completely different spinlock implementation?
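
(For concreteness, a minimal sketch of how such a knob could be wired
up in the guest kernel.  This is purely hypothetical:
unfair_pvspinlock_cpu_threshold does not exist in mainline, and the
helper names below are invented for illustration; the sketch assumes
the threshold simply gates the lock choice at boot:)

  /*
   * Hypothetical sketch only: "unfair_pvspinlock_cpu_threshold" is not
   * a real kernel parameter.  It illustrates the proposal quoted above:
   * parse a CPU-count threshold at boot, then consult it when deciding
   * whether to pick an unfair spinlock implementation.
   */
  #include <linux/cpumask.h>
  #include <linux/init.h>
  #include <linux/kernel.h>

  static unsigned int unfair_pvspinlock_cpu_threshold;  /* 0 = disabled */

  static int __init parse_unfair_threshold(char *arg)
  {
          return kstrtouint(arg, 10, &unfair_pvspinlock_cpu_threshold);
  }
  early_param("unfair_pvspinlock_cpu_threshold", parse_unfair_threshold);

  /*
   * Invented decision point: small guests take unfair locks, larger
   * guests keep the existing paravirt/native qspinlock.
   */
  static bool __init want_unfair_spinlocks(void)
  {
          return unfair_pvspinlock_cpu_threshold &&
                 num_possible_cpus() <= unfair_pvspinlock_cpu_threshold;
  }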

Is the current default behavior of Linux guests when
kvm_pv_unhalt is unavailable a good default?  If using
kvm_pv_unhalt is not always a good idea, why do Linux guests
default to eagerly trying to use it only because the host says
it's available?
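
(For reference, the guest-side selection in question looks roughly like
the following in arch/x86/kernel/kvm.c of that era; simplified here,
and guarded by CONFIG_PARAVIRT_SPINLOCKS in the real tree.  The guest
commits to the paravirt qspinlock slow path as soon as CPUID advertises
KVM_FEATURE_PV_UNHALT, with no sizing heuristic of its own:)

  /* Simplified from arch/x86/kernel/kvm.c (circa Linux 4.13). */
  void __init kvm_spinlock_init(void)
  {
          if (!kvm_para_available())
                  return;
          /* Only proceed if the host advertises PV unhalt in CPUID. */
          if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
                  return;

          /* Eagerly switch to the paravirt qspinlock slow path. */
          __pv_init_lock_hash();
          pv_lock_ops.queued_spin_lock_slowpath =
                  __pv_queued_spin_lock_slowpath;
          pv_lock_ops.queued_spin_unlock =
                  PV_CALLEE_SAVE(__pv_queued_spin_unlock);
          pv_lock_ops.wait = kvm_wait;
          pv_lock_ops.kick = kvm_kick_cpu;
  }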

-- 
Eduardo


