
[Qemu-devel] [Bug 602336] [NEW] bad network performance with 10Gbit


From: zerocoolx
Subject: [Qemu-devel] [Bug 602336] [NEW] bad network performance with 10Gbit
Date: Tue, 06 Jul 2010 16:25:20 -0000

Public bug reported:

Hello,
I am having trouble with the network performance inside my virtual machines. I don't 
know whether this is really a bug, but I have not found a solution to this problem in 
other forums or mailing lists.

My KVM host machine is connected to a 10Gbit network, and all interfaces are
configured with an MTU of 4132. On the host itself I have no problems and can
use the full bandwidth; host details and iperf results follow below.
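
For reference, setting the 4132-byte MTU on the host side comes down to something 
like the following sketch (interface and bridge names are placeholders, not taken 
from the actual configuration):

# ip link set dev eth0 mtu 4132     (physical 10Gbit NIC, placeholder name)
# ip link set dev br0 mtu 4132      (bridge the guests attach to, placeholder name)
# ip link show dev eth0             (verify that the MTU took effect)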

CPU_Info:
2x Intel Xeon X5570
flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 
clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm 
constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf 
pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 
popcnt lahf_lm ida tpr_shadow vnmi flexpriority ept vpid

KVM Version:
QEMU PC emulator version 0.12.3 (qemu-kvm-0.12.3), Copyright (c) 2003-2008 
Fabrice Bellard
0.12.3+noroms-0ubuntu9

KVM Host Kernel:
2.6.32-22-server #36-Ubuntu SMP Thu Jun 3 20:38:33 UTC 2010 x86_64 GNU/Linux

KVM Host OS:
Ubuntu 10.04 LTS
Codename: lucid

KVM Guest Kernel:
2.6.32-22-server #36-Ubuntu SMP Thu Jun 3 20:38:33 UTC 2010 x86_64 GNU/Linux

KVM Guest OS:
Ubuntu 10.04 LTS
Codename: lucid


# iperf -c 10.10.80.100 -w 65536 -p 12345 -t 60 -P4
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-60.0 sec 18.8 GBytes 2.69 Gbits/sec
[ 5] 0.0-60.0 sec 15.0 GBytes 2.14 Gbits/sec
[ 6] 0.0-60.0 sec 19.3 GBytes 2.76 Gbits/sec
[ 3] 0.0-60.0 sec 15.1 GBytes 2.16 Gbits/sec
[SUM] 0.0-60.0 sec 68.1 GBytes 9.75 Gbits/sec


Inside a virtual machine I do not reach this result:

# iperf -c 10.10.80.100 -w 65536 -p 12345 -t 60 -P 4
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-60.0 sec 5.65 GBytes 808 Mbits/sec
[ 4] 0.0-60.0 sec 5.52 GBytes 790 Mbits/sec
[ 5] 0.0-60.0 sec 5.66 GBytes 811 Mbits/sec
[ 6] 0.0-60.0 sec 5.70 GBytes 816 Mbits/sec
[SUM] 0.0-60.0 sec 22.5 GBytes 3.23 Gbits/sec

I can only use 3.23 Gbit/s of the 10 Gbit/s. I use the virtio driver for all of
my VMs, but I have also tried the e1000 NIC model instead.
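
For context, the guests are started with a virtio NIC on a tap/bridge backend, 
roughly like the following sketch (memory size, image path, tap name and MAC are 
placeholders, not the exact command line used here):

# kvm -m 2048 -smp 2 \
    -drive file=/var/lib/libvirt/images/guest.img,if=virtio \
    -net nic,model=virtio,macaddr=52:54:00:12:34:56 \
    -net tap,ifname=tap0,script=no,downscript=no

The e1000 test would correspond to swapping model=virtio for model=e1000.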

When I start the iperf test on multiple VMs simultaneously, I can use the full
bandwidth of the KVM host's interface; it is just a single VM that cannot use
the full bandwidth on its own. Is this a known limitation, or can I improve
this performance?

Does anyone have an idea how I can improve my network performance? It is very
important to me, because I want to use the network interface to boot all VMs
via AoE (ATA over Ethernet).

If I mount a hard disk via AoE inside a VM, I only get these results:
Write  | CPU | Rewrite | CPU | Read   | CPU
102440 | 10  | 51343   | 5   | 104249 | 3

On the KVM host I get these results with the AoE device mounted directly:
Write  | CPU | Rewrite | CPU | Read   | CPU
205597 | 19  | 139118  | 11  | 391316 | 11

If I mount the AoE device directly on the KVM host and put a virtual hard-disk 
file on it, I get the following results inside a VM that uses this hard-disk file:
Write  | CPU | Rewrite | CPU | Read   | CPU
175140 | 12  | 136113  | 24  | 599989 | 29
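
For reference, attaching and mounting an AoE export looks roughly like this 
(shelf/slot number and mount point are placeholders; this assumes the aoe kernel 
module and aoetools are installed):

# modprobe aoe
# aoe-discover
# aoe-stat
# mount /dev/etherd/e0.0 /mnt/aoe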

I have also tested vhost_net, but without success. I upgraded my kernel to
2.6.35-6 with vhost_net support and installed the qemu-kvm version from
git://git.kernel.org/pub/scm/linux/kernel/git/mst/qemu-kvm.git (0.12.50),
but I still get the same results as before.
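
For reference, enabling vhost_net typically comes down to something like the 
following sketch (tap, id and MAC values are placeholders, and the exact option 
spelling may differ between qemu-kvm snapshots):

# modprobe vhost_net
# qemu-system-x86_64 ... \
    -netdev tap,id=hostnet0,ifname=tap0,script=no,vhost=on \
    -device virtio-net-pci,netdev=hostnet0,mac=52:54:00:12:34:56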

I have already posted my problem to a few forums, but got no reply so far.

I would be very happy if someone could help me.

best regards

** Affects: qemu
     Importance: Undecided
         Status: New

-- 
bad network performance with 10Gbit
https://bugs.launchpad.net/bugs/602336
You received this bug notification because you are a member of qemu-devel-ml,
which is subscribed to QEMU.

Status in QEMU: New
