From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [RFC] create a single workqueue for each vm to update vm irq routing table
Date: Tue, 26 Nov 2013 18:05:37 +0200

On Tue, Nov 26, 2013 at 02:56:10PM +0200, Gleb Natapov wrote:
> On Tue, Nov 26, 2013 at 01:47:03PM +0100, Paolo Bonzini wrote:
> > On 26/11/2013 13:40, Zhanghaoyu (A) wrote:
> > > When the guest sets an irq's smp_affinity, a VMEXIT occurs and the
> > > vcpu thread returns from the hypervisor to QEMU via the ioctl; the
> > > vcpu thread then asks the hypervisor to update the irq routing table.
> > > In kvm_set_irq_routing, synchronize_rcu is called, and the current
> > > vcpu thread is blocked for a long time waiting for the RCU grace
> > > period. During this period the vcpu cannot service the VM, so the
> > > interrupts delivered to this vcpu cannot be handled in time, and the
> > > apps running on this vcpu cannot be serviced either.
> > > This is unacceptable in some real-time scenarios, e.g. telecom.
> > > 
> > > So I want to create a single workqueue for each VM to perform the RCU
> > > synchronization for the irq routing table asynchronously, and let the
> > > vcpu thread return and VMENTER to service the VM immediately, with no
> > > need to block waiting for the RCU grace period.
> > > I have implemented a rough patch and tested it in our telecom
> > > environment; the problem described above disappeared.
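
For context, the path being described looks roughly like this in the
3.12-era virt/kvm/irqchip.c (simplified sketch; allocation and validation
of the new table omitted, field names approximate):

    int kvm_set_irq_routing(struct kvm *kvm,
                            const struct kvm_irq_routing_entry *ue,
                            unsigned nr, unsigned flags)
    {
            struct kvm_irq_routing_table *new, *old;

            /* ... allocate and fill "new" from the user entries ... */

            mutex_lock(&kvm->irq_lock);
            old = kvm->irq_routing;
            rcu_assign_pointer(kvm->irq_routing, new);  /* publish */
            mutex_unlock(&kvm->irq_lock);

            synchronize_rcu();  /* vcpu thread blocks here for a full
                                   grace period before it can VMENTER */

            kfree(old);         /* safe: no reader can still see "old" */
            return 0;
    }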
> > 
> > I don't think a workqueue is even needed.  You just need to use call_rcu
> > to free "old" after releasing kvm->irq_lock.
> > 
> > What do you think?
> > 
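A minimal sketch of that call_rcu() approach, assuming
struct kvm_irq_routing_table grows an rcu_head member (the "rcu" member
and the callback name below are made up for illustration, they are not in
the tree):

    /* invoked from softirq context once a grace period has elapsed */
    static void free_irq_routing_table(struct rcu_head *head)
    {
            struct kvm_irq_routing_table *rt =
                    container_of(head, struct kvm_irq_routing_table, rcu);

            kfree(rt);
    }

and in kvm_set_irq_routing():

            mutex_unlock(&kvm->irq_lock);

            /* instead of synchronize_rcu(); kfree(old); */
            call_rcu(&old->rcu, free_irq_routing_table);

            /* the vcpu thread can VMENTER immediately; "old" is freed
               asynchronously once all current readers are done */
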
> It should be rate limited somehow. Since it is guest triggerable, a guest
> may cause the host to allocate a lot of memory this way.

The checks in __call_rcu() should handle this, I think.  These keep a
per-CPU callback counter: by default RCU takes evasive action once more
than 10K callbacks are waiting on a given CPU, and the relevant limits are
tunable via the rcutree.qhimark and rcutree.blimit module parameters.
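
For reference, the defaults look roughly like this in kernel/rcutree.c of
that era (a sketch from memory, not an exact quote of the source):

    static long blimit = 10;       /* max callbacks invoked per batch */
    static long qhimark = 10000;   /* if this many are pending, ignore
                                      blimit and push grace periods along */
    static long qlowmark = 100;    /* fall back to blimit below this */

    module_param(blimit, long, 0444);
    module_param(qhimark, long, 0444);
    module_param(qlowmark, long, 0444);

They can be set at boot on the kernel command line, e.g. rcutree.qhimark=<n>.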



> Is this about MSI interrupt affinity? IIRC, changing INTx interrupt
> affinity should not trigger a kvm_set_irq_routing update. If this is
> about MSI only, then what about changing userspace to use KVM_SIGNAL_MSI
> for MSI injection?
> 
> --
>                       Gleb.
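
For reference, injecting an MSI via the KVM_SIGNAL_MSI ioctl bypasses the
routing table entirely. A minimal userspace sketch (requires the
KVM_CAP_SIGNAL_MSI capability; vmfd, address and data are placeholders for
whatever the VMM already has at hand):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* vmfd: VM file descriptor from KVM_CREATE_VM;
       address/data: the MSI address and payload the guest programmed */
    static int inject_msi(int vmfd, uint64_t address, uint32_t data)
    {
            struct kvm_msi msi = {
                    .address_lo = (uint32_t)address,
                    .address_hi = (uint32_t)(address >> 32),
                    .data       = data,
            };

            /* > 0: delivered, 0: blocked by the guest, < 0: error */
            return ioctl(vmfd, KVM_SIGNAL_MSI, &msi);
    }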


