Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
Date: Tue, 13 Nov 2012 18:46:04 +0200

On Tue, Nov 13, 2012 at 05:35:55PM +0100, Peter Lieven wrote:
> 
> On 13.11.2012 at 17:33, Michael S. Tsirkin wrote:
> 
> > On Tue, Nov 13, 2012 at 06:22:56PM +0200, Michael S. Tsirkin wrote:
> >> On Tue, Nov 13, 2012 at 12:49:03PM +0100, Peter Lieven wrote:
> >>> 
> >>> On 09.11.2012 19:03, Peter Lieven wrote:
> >>>> Remark:
> >>>> If I disable interrupts on CPU1-3 for virtio, the performance is OK again.
> >>>> 
> >>>> Now we need someone with deeper knowledge of the in-kernel irqchip and
> >>>> the virtio/vhost driver development to say whether this is a regression
> >>>> in qemu-kvm or a problem with the old virtio drivers when they receive
> >>>> the interrupt on different CPUs.
> >>> anyone?
> >> 
> >> Looks like the problem is not in the guest: I tried an Ubuntu guest
> >> on a RHEL host and got 8GB/s with vhost and 4GB/s without
> >> on a host-to-guest benchmark.
> >> 
> > 
> > Tried with upstream qemu on a RHEL kernel and that's even a bit faster.
> > So it's the Ubuntu kernel. Vanilla 2.6.32 didn't have vhost at all,
> > so maybe their vhost backport is broken in some way.
> 
> That might be. I think Dietmar was reporting that he had problems
> with Debian. They likely use the same backport.
> 
> Is it correct that with kernel_irqchip the IRQs are
> delivered to all vCPUs? Without kernel_irqchip (in qemu-kvm 1.0.1,
> for instance) they were delivered only to vCPU 0, and that scenario
> was working.
> 
> Peter

You need to look at how the MSI tables are programmed to check whether
that's OK - the guest can program MSI to deliver interrupts like that.
pciutils does not dump this, unfortunately, so you'll have to write a
bit of C code if you want to check.
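
A minimal sketch of the kind of C code meant here, assuming a Linux guest
with root access; the device address 0000:00:03.0 and the file name
msix-dump.c are placeholders, substitute the guest's virtio-net function.
It walks PCI config space for the MSI-X capability, mmaps the BAR holding
the vector table, and prints each vector's message address/data; on x86
the destination APIC ID sits in bits 19:12 of the address, so you can see
where each virtio vector is routed.

/*
 * Sketch: dump a PCI device's MSI-X table from inside a Linux guest,
 * to see which destination APIC ID (normally the vCPU index) each
 * vector targets.  0000:00:03.0 is a placeholder address; substitute
 * the virtio-net function and run as root.
 *
 * Build: gcc -o msix-dump msix-dump.c
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define DEV "/sys/bus/pci/devices/0000:00:03.0"

int main(void)
{
    uint8_t cfg[256];
    char path[128];
    int fd;

    snprintf(path, sizeof(path), "%s/config", DEV);
    fd = open(path, O_RDONLY);
    if (fd < 0 || read(fd, cfg, sizeof(cfg)) != (ssize_t)sizeof(cfg)) {
        perror("config space (need root)");
        return 1;
    }
    close(fd);

    /* Walk the capability list for MSI-X (capability ID 0x11). */
    uint8_t pos = cfg[0x34];
    while (pos && cfg[pos] != 0x11)
        pos = cfg[pos + 1];
    if (!pos) {
        fprintf(stderr, "no MSI-X capability\n");
        return 1;
    }

    uint16_t ctrl = cfg[pos + 2] | (cfg[pos + 3] << 8);
    int nvec = (ctrl & 0x7ff) + 1;      /* table size field + 1      */
    uint32_t tbl;
    memcpy(&tbl, cfg + pos + 4, 4);     /* Table Offset/BIR register */
    int bar = tbl & 0x7;                /* BAR that holds the table  */
    uint32_t off = tbl & ~0x7u;         /* table offset within it    */

    snprintf(path, sizeof(path), "%s/resource%d", DEV, bar);
    fd = open(path, O_RDONLY);
    if (fd < 0) {
        perror("open BAR resource");
        return 1;
    }
    uint8_t *map = mmap(NULL, off + (size_t)nvec * 16, PROT_READ,
                        MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) {
        perror("mmap MSI-X BAR");
        return 1;
    }

    for (int i = 0; i < nvec; i++) {
        volatile uint32_t *e = (volatile uint32_t *)(map + off + i * 16);
        uint32_t lo = e[0], data = e[2], vctl = e[3];
        /* On x86 the destination APIC ID is carried in bits 19:12 of
         * the message address (physical destination mode). */
        printf("vector %2d: addr=0x%08x data=0x%08x%s dest-apic=%u\n",
               i, lo, data, (vctl & 1) ? " (masked)" : "",
               (lo >> 12) & 0xff);
    }
    return 0;
}

If every vector's destination is APIC ID 0, the guest is still routing all
virtio interrupts to vCPU 0; differing IDs across vectors mean the guest
(or irqbalance) has reprogrammed the table to spread them across vCPUs.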

-- 
MST


