From: Andrew Jones
Subject: Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
Date: Thu, 18 Aug 2016 09:41:03 +0200
User-agent: Mutt/1.6.0.1 (2016-04-01)

On Wed, Aug 17, 2016 at 01:00:05PM -0400, Laine Stump wrote:
> On 08/17/2016 12:13 PM, Andrew Jones wrote:
> > On Wed, Aug 17, 2016 at 08:08:11PM +0800, Kevin Zhao wrote:
> > > Hi all,
> > >       Now I'm investigating net-device and disk hotplug for AArch64.
> > > For virtio, the default address type is virtio-mmio. Since libvirt
> > > 1.3.5, users can explicitly set the address type to pci, and libvirt
> > > will then pass the virtio-pci parameters to QEMU.
> > >       Both my host and guest OS are Debian 8; the QEMU version is 2.6.0
> > > and the libvirt version is 1.3.5.
> > >       For the net-device, I changed the address type to pci, and libvirt
> > > passes the command below:
> > >       -device
> > > virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0d:25:25,bus=pci.2,addr=0x1
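> > > 
> > >       In libvirt XML terms, the change is roughly this (a minimal
> > > sketch; the source network name is just a placeholder):
> > > 
> > >       <interface type='network'>
> > >         <source network='default'/>
> > >         <model type='virtio'/>
> > >         <!-- bus 0x02 / slot 0x01 matches bus=pci.2,addr=0x1 above -->
> > >         <address type='pci' domain='0x0000' bus='0x02' slot='0x01'
> > >                  function='0x0'/>
> > >       </interface>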
> > > 
> > >       After booting, the eth0 device disappears (eth0 appears when the
> > > address is virtio-mmio), but I can find another net-device, enp2s1,
> > > which also fails to get an address via DHCP. Running lspci shows:
> > > 02:01.0 Ethernet controller: Red Hat, Inc Virtio network device
> > > I'm not sure whether it is working.
> > > 
> > >       For the disk device, when I change the address type to pci, the
> > > whole QEMU command is:
> > > https://paste.fedoraproject.org/409553/, but the VM cannot boot
> > > successfully. Does QEMU not support virtio-pci disk devices on AArch64
> > > the way it does on x86_64?
> > >       Thanks! Since I am not very familiar with QEMU, I'm really
> > > looking forward to your response.
> > > 
> > > Best Regards,
> > > Kevin Zhao
> > libvirt 1.3.5 is a bit old. Later versions no longer unconditionally add
> > the i82801b11 bridge, which was necessary to use PCI devices with the PCIe
> > host bridge mach-virt has. IMO, libvirt and qemu still have a long way to
> > go in order to configure a base/standard mach-virt PCIe machine.
> 
> Well, you can do it now, but you have to manually assign the PCI addresses
> of devices (and if you want hotplug you need to live with Intel/TI-specific
> PCIe controllers).
> 
> 
> > 
> > 1) If we want to support both PCIe devices and PCI, then things are messy.
> >     Currently we propose dropping PCI support. mach-virt pretty much
> >     exclusively uses virtio, which can be set to PCIe mode (virtio-1.0)
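> > 
> >     For example, roughly (an untested sketch; the netdev id is a
> >     placeholder):
> > 
> >       -device virtio-net-pci,netdev=net0,disable-legacy=on,disable-modern=off
> > 
> >     With legacy disabled, the device presents itself as a PCIe device
> >     and can sit behind a PCIe port.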
> 
> I have a libvirt patch just about ACKed for pushing upstream that will
> automatically assign virtio-pci devices to a PCIe slot (if the qemu binary
> supports virtio-1.0):
> 
> https://www.redhat.com/archives/libvir-list/2016-August/msg00852.html
> 
> Separate patches do the same for the e1000e emulated network device (which
> you probably don't care about) and the nec-usb-xhci (USB3) controller (more
> useful):
> 
> https://www.redhat.com/archives/libvir-list/2016-August/msg00732.html
> 

Thanks for the update, Laine. This sounds great to me. With those patches
we can switch from virtio-mmio to virtio-pci easily, even if we're still
missing hotplug for a bit longer. What limit do we have on the number of
devices when we don't have any switches? I think I experimented once and
found it to be 7.
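
For reference, adding a hotpluggable slot today means instantiating the
Intel root port QEMU ships; roughly (untested sketch, ids are made up):

  -device ioh3420,id=rp1,bus=pcie.0,chassis=1 \
  -device virtio-net-pci,netdev=net0,bus=rp1,disable-legacy=on

Each ioh3420 provides a single hotpluggable slot.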

> Once these are in place, the only type of device of any consequence that I
> can see still having no PCIe alternative is audio. (Even though only the
> virgl video device is PCIe, libvirt has always assigned the primary video
> to slot 1 on pcie-root anyway; although you shouldn't put a legacy PCI
> device on a pcie-root-port or pcie-switch-downstream-port, it is acceptable
> to plug it directly into pcie-root, as long as you know you won't need to
> hotplug it.)
> 
> > 2) root complex ports, switches (upstream/downstream ports) are currently
> >     based on Intel parts. Marcel is thinking about creating generic models.
> 
> I say this every time it comes up, so just to be consistent: +1 :-)
> 
> > 3) libvirt needs to learn how to plug everything together, in proper PCIe
> >     fashion, leaving holes for hotplug.
> 
> See above about virtio, although that doesn't cover the whole story. The
> other part (which I'm working on right now) is that libvirt needs to
> automatically add pcie-root-port, pcie-switch-upstream-port, and
> pcie-switch-downstream-port devices as necessary. With the patches I
> mentioned above, you still have to manually add enough pcie-*-port
> controllers to the config, and then libvirt will plug the PCIe devices into
> the right place. This is simple enough to do, but it does require
> intervention.
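> 
> For example, something along these lines in the config (a rough
> sketch; libvirt fills in the index/address details):
> 
>   <controller type='pci' model='pcie-root-port'/>
>   <controller type='pci' model='pcie-root-port'/>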

OK, so we want this to support hotplug and eventually chain switches,
bumping our device limit up higher and higher. To what? I'm not sure;
I guess we're still limited by address space.
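
Chaining would presumably look something like this with the Intel/TI
parts we have today (untested sketch, ids are made up):

  -device ioh3420,id=rp1,bus=pcie.0,chassis=1 \
  -device x3130-upstream,id=up1,bus=rp1 \
  -device xio3130-downstream,id=dn1,bus=up1,chassis=2,slot=0 \
  -device virtio-blk-pci,drive=drive0,bus=dn1

Each downstream port is one slot, and more downstream ports can hang
off the same upstream port.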

> 
> As far as leaving holes for hotplug, there's actually still a bit of an open
> question there - with machinetypes that use only legacy-PCI, *all* slots are
> hotpluggable, and they're added 31 at a time, so there was never any
> question about which slots were hotpluggable, and it would be very rare to
> end up with a configuration that had 0 free slots available for hotplug
> (actually libvirt would always make sure there was at least one, but in
> practice there would be many more open slots). With PCIe-capable
> machinetypes that has changed, since the root complex (pcie-root) doesn't
> support hotplug, and new slots are added 1 at a time (pcie-*-port) rather
> than 31 at a time. This means you have to really go out of your way if you
> want open slots for hotplug (and even if you want devices in the machine at
> boot time to be hot-unpluggable).
> 
> I'm still not sure just how far we need to go in this regard.  We've already
> decided that, unless manually set to an address on pcie-root by the
> user/management application, all PCI devices will be auto-assigned to a slot
> that supports hotplug. What I'm not sure about is whether we should always
> auto-add an extra pcie-*-port to be sure a device can be hotplugged, or if
> we should admit that 1 available slot isn't good enough for all situations,
> so we should instead just leave it up to the user/management to manually add
> extra ports if they think they'll want to hotplug something later.

Hmm... Maybe the tools can make this easier by offering an option to
provide N extra ports.

Hmm2... I think I agree that we don't need to worry too much about
providing free ports for hotplug (maybe just one for fun). With
virtio-scsi we can already hotplug disks. If we want to provide multiple
virtio-net devices for the price of one port, we can enable multifunction.
And, IIRC, there's work to get ARI functioning, allowing multifunction to
go nuts (assuming I understand its purpose correctly).
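
For several NICs out of one slot, something like this (untested sketch;
rp1 stands for some root port and the netdev ids are placeholders):

  -device virtio-net-pci,netdev=net0,bus=rp1,addr=0x0.0x0,multifunction=on \
  -device virtio-net-pci,netdev=net1,bus=rp1,addr=0x0.0x1

That's up to 8 functions per slot without ARI, more with it.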

So maybe the default config just needs 3 ports (rough sketch below)?
 1 virtio-scsi controller with as many disks as requested
 1 virtio-net device with as many functions as NICs are requested
 1 extra port
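
In QEMU terms, roughly (untested sketch; the ids, netdev, and drive
names are all placeholders):

  -device ioh3420,id=rp1,bus=pcie.0,chassis=1 \
  -device ioh3420,id=rp2,bus=pcie.0,chassis=2 \
  -device ioh3420,id=rp3,bus=pcie.0,chassis=3 \
  -device virtio-scsi-pci,id=scsi0,bus=rp1 \
  -device scsi-hd,drive=drive0,bus=scsi0.0 \
  -device virtio-net-pci,netdev=net0,bus=rp2,multifunction=on,addr=0x0.0x0

with rp3 left empty for hotplug.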

Thanks,
drew


