From: Jason Wang
Subject: Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
Date: Thu, 25 May 2017 20:31:09 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.1.1



On 2017-05-25 20:22, Jason Wang wrote:

Even with the vhost-pci to virtio-net configuration, I think rx zerocopy could be achieved, but it is not implemented in your driver (it is probably easier in a pmd).

Yes, it would be easier with a dpdk pmd. But I think it would not be important in the NFV use case,
since the data flow often goes in only one direction.

Best,
Wei


I would say let's not give up on any possible performance optimization now; you can still do it in the future.

If you still want to keep the copy on both the tx and rx paths, you should:

- measure the performance with packet sizes larger than 64B
- consider whether it is a good idea to do the copy in the vcpu thread, or whether it should be moved to another thread (or threads); a rough sketch of the latter is below
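
Something along these lines, purely as an illustration (not taken from the vhost-pci series; copy_job, queue_packet_copy and friends are made-up names, and flow control plus completion signalling back to the guest are left out):

/*
 * Rough illustration only: the copy is queued from the vcpu thread and
 * performed by a dedicated worker thread.  All names here are hypothetical.
 */
#include <pthread.h>
#include <stddef.h>
#include <string.h>

#define COPY_RING_SIZE 256

struct copy_job {
    void *dst;
    const void *src;
    size_t len;
};

static struct copy_job copy_ring[COPY_RING_SIZE];
static unsigned int ring_head, ring_tail;
static pthread_mutex_t ring_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t ring_cond = PTHREAD_COND_INITIALIZER;
static pthread_t copy_thread;

/* Called from the vcpu thread: queue the copy instead of doing it inline. */
static void queue_packet_copy(void *dst, const void *src, size_t len)
{
    pthread_mutex_lock(&ring_lock);
    copy_ring[ring_head % COPY_RING_SIZE] =
        (struct copy_job){ .dst = dst, .src = src, .len = len };
    ring_head++;
    pthread_cond_signal(&ring_cond);
    pthread_mutex_unlock(&ring_lock);
}

/* Dedicated copy thread: drains the ring and performs the memcpy. */
static void *copy_worker(void *arg)
{
    (void)arg;
    for (;;) {
        struct copy_job job;

        pthread_mutex_lock(&ring_lock);
        while (ring_tail == ring_head) {
            pthread_cond_wait(&ring_cond, &ring_lock);
        }
        job = copy_ring[ring_tail % COPY_RING_SIZE];
        ring_tail++;
        pthread_mutex_unlock(&ring_lock);

        memcpy(job.dst, job.src, job.len);
        /* A real device would signal completion to the guest here. */
    }
    return NULL;
}

static void start_copy_worker(void)
{
    pthread_create(&copy_thread, NULL, copy_worker, NULL);
}

The point is simply that the vcpu thread only enqueues the work and returns to the guest quickly, while the memcpy cost is paid on a core of its own.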

Thanks

And what's more important, since you care seriously about NFV, I would really suggest drafting a pmd for vhost-pci and using it for benchmarking. That is the real-life case: OVS dpdk is known to be not optimized for kernel drivers.
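
To make that concrete, the benchmark side could be as simple as the usual rx-burst/tx-burst loop in the style of testpmd's io mode. Illustration only, not the actual pmd: port 0 is assumed to be the vhost-pci port, and all device/queue setup is omitted.

/*
 * Minimal io-forwarding loop sketch; rte_eth_dev_configure(),
 * rte_eth_rx_queue_setup() and friends are omitted for brevity.
 */
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_debug.h>

#define BURST_SIZE 32

int main(int argc, char **argv)
{
    uint16_t port = 0;  /* assumed: the vhost-pci port */

    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    /* Port and queue setup omitted. */

    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);

        if (nb_rx == 0)
            continue;

        /* Send the packets straight back out on the same port. */
        uint16_t nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);

        /* Free whatever the tx queue could not take. */
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);
    }

    return 0;
}

With something like this on top of a vhost-pci pmd, the numbers should be directly comparable to existing testpmd-style results.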

Good performance numbers can help us examine the correctness of both the design and the implementation.

Thanks


