From: Luke Gorrie
Subject: Re: [Qemu-devel] [snabb-devel:300] Re: snabbswitch integration with QEMU for userspace ethernet I/O
Date: Tue, 4 Jun 2013 14:19:23 +0200

Howdy,

My brain is slowly catching up with all of the information shared in this thread. Here is my first attempt to tease out a way forward for Snabb Switch.

The idea that excites me is to implement a complete PCI device in Snabb Switch and expose this to the guest at the basic PCI/MMIO/DMA level. The device would be a Virtio network adapter based on Rusty Russell's specification. The switch<->VM interface would be based on PCI rather than vhost.

I _think_ this is the basic idea that Stefano Stabellini and Julian Stecklina are talking about.

I like this because:

- The abstraction level is primarily PCI hardware devices (hardware) rather than system calls (kernel) as with vhost/socket/splice/etc. This is a much better fit for the Snabb Switch code, which already does physical network I/O via built-in drivers that work directly over PCI MMIO/DMA. I invest my energy in learning more about PCI and Virtio rather than Linux and QEMU.

- The code feels more generic. The software we develop is a standard Virtio PCI network device rather than a specific QEMU-vhost interface. In principle (...) we could reuse the same code with more hypervisors in the future.

- The code that I am not well positioned to write myself - the hypervisor side - may already have been written or prototyped by others and be available for testing, even though it's not in mainline QEMU.

I have some questions, if you don't mind:

1. Have I understood the idea correctly above? (Or what do I have wrong?)
2. Is this PCI integration available in some code base that I could test with? e.g. non-mainline QEMU, Xen, vbox, VMware, etc?
3. If I hack up a proof of concept, what is most likely to go wrong in an OpenStack context? I mean the "memory hotplug" and "track what is dirty" issues that have been alluded to. Is my code going to run slowly? Drop packets? Break during migration? Crash VMs?

Long-term I do need a solution that works with standard mainline QEMU but I could also start with something more custom and revisit the whole issue next year. The most important thing now is to start making forward progress and have something working and performant this summer/autumn.

Cheers & thanks for all the information,
-Luke


