From: Barak Wasserstrom
Subject: Re: [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm & KVM
Date: Mon, 13 Jan 2014 13:24:16 +0200
Hi, Barak,

We've tried vhost-net in kvm-arm on an Arndale Exynos-5250 board (it requires some patches in qemu and kvm, of course). It works (without irqfd support); however, the performance does not increase much: the iperf throughput of virtio-net and vhost-net is 93.5Mbps and 93.6Mbps respectively. I think the results are because both virtio-net and vhost-net have almost reached the limit of the 100Mbps Ethernet.

The good news is that we even ported vhost-net to our kvm-a9 hypervisor (see http://academic.odysci.com/article/1010113020064758/evaluation-of-a-server-grade-software-only-arm-hypervisor), and the throughput of vhost-net on that platform (with 1Gbps Ethernet) increased from 323Mbps to 435Mbps.
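For reference, a minimal sketch of the sort of command line involved (machine type, images and interface names are illustrative, not our exact setup; it assumes a vhost-capable host kernel exposing /dev/vhost-net and a tap0 already bridged to the physical NIC):

    # virtio-net over virtio-mmio, with the vhost-net backend enabled
    qemu-system-arm -enable-kvm -M vexpress-a15 -cpu cortex-a15 -m 512 \
        -kernel zImage -dtb vexpress-v2p-ca15-tc1.dtb \
        -append "console=ttyAMA0 root=/dev/vda" -nographic \
        -netdev tap,id=net0,ifname=tap0,script=no,downscript=no,vhost=on \
        -device virtio-net-device,netdev=net0

    # then, inside the guest, measure throughput against a host-side server
    iperf -c 192.168.0.1 -t 30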
--
Best Regards,
潘穎軒 Ying-Shiuan Pan
H Div., CCMA, ITRI, TW
2014/1/13 Peter Maydell <address@hidden>:

On 12 January 2014 21:49, Barak Wasserstrom <address@hidden> wrote:
> Thanks - I got virtio-net-device running now, but performance is terrible.
> When I look at the guest's ethernet interface features (ethtool -k eth0) I
> see all offload features are disabled.
> I'm using a virtual tap on the host (tap0 bridged to eth3).
> On the tap I also see all offload features are disabled, while on br0 and
> eth3 I see the expected offload features.
> Can this explain the terrible performance I'm facing?
> If so, how can this be changed?
> If not, what else can cause such bad performance?
> Do you know if vhost_net can be used on an ARM Cortex-A15 host/guest, even
> though the guest doesn't support PCI & MSI-X?
I have no idea, I'm afraid. I don't have enough time available to
investigate performance issues at the moment; if you find anything
specific you can submit patches...
thanks
-- PMM
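On the offload question above, a sketch of how the tap/bridge offload state can be inspected and toggled (interface names taken from Barak's message; which features can actually be enabled varies by kernel and driver):

    # show current offload state on the tap, the bridge and the physical NIC
    ethtool -k tap0
    ethtool -k br0
    ethtool -k eth3

    # try turning checksum, scatter-gather and TSO on for the tap
    ethtool -K tap0 tx on sg on tso on

    # note: qemu only enables TSO/checksum offloads on a tap when the guest
    # negotiates the matching virtio-net features, so re-check in the guest too
    ethtool -k eth0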