From: Andrew Jones
Subject: Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
Date: Wed, 17 Aug 2016 18:13:03 +0200
User-agent: Mutt/1.6.0.1 (2016-04-01)

On Wed, Aug 17, 2016 at 08:08:11PM +0800, Kevin Zhao wrote:
> Hi all,
>      Now I'm investigating net-device and disk hotplug for
> AArch64. For virtio, the default address type is virtio-mmio. Since libvirt
> 1.3.5, users can explicitly set the address type to PCI, and libvirt
> will then pass the virtio-pci parameters to QEMU.
>      Both my host and guest OS are Debian 8. The QEMU version is 2.6.0 and
> the libvirt version is 1.3.5.
>      For the net device, I changed the address type to PCI, and libvirt
> passes the command below:
>      -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0d:25:25,bus=pci.2,addr=0x1
> 
>      After booting, the eth0 device disappears (eth0 appears when the
> address is virtio-mmio). I can find another net device, enp2s1, but it
> cannot get an address via DHCP. Running lspci shows:
> 02:01.0 Ethernet controller: Red Hat, Inc Virtio network device
> I'm not sure whether it worked.
> 
>      For the disk device, when I change the address type to PCI, the whole
> qemu command is: https://paste.fedoraproject.org/409553/, but the VM
> cannot boot successfully. Does QEMU not support virtio-pci disk devices
> on AArch64 as it does on x86_64?
>      Thanks! Since I am not very familiar with QEMU, I'm really looking
> forward to your response.
> 
> Best Regards,
> Kevin Zhao

libvirt 1.3.5 is a bit old. Later versions no longer unconditionally add
the i82801b11 bridge, which was needed to use PCI devices with mach-virt's
PCIe host bridge. IMO, libvirt and qemu still have a long way to go in
order to configure a base/standard mach-virt PCIe machine.

1) If we want to support both PCIe devices and PCI, then things are messy.
   Currently we propose dropping PCI support. mach-virt pretty much
   exclusively uses virtio, which can be set to PCIe mode (virtio-1.0).
2) root complex ports, switches (upstream/downstream ports) are currently
   based on Intel parts. Marcel is thinking about creating generic models.
3) libvirt needs to learn how to plug everything together, in proper PCIe
   fashion, leaving holes for hotplug.
4) Probably more... I forget all the different issues we discovered when
   we started playing with this a few months ago.

The good news is that x86 folk want all the same things for the q35 model.
mach-virt enthusiasts like us get to ride along pretty much for free.

So, using virtio-pci with mach-virt and libvirt isn't currently possible
without manual changes to the XML. It might be nice to document how to
manually convert a guest, so developers who want to use virtio-pci don't
have to abandon libvirt. I'd have to look into that, or ask one of our
libvirt friends to help. The instructions would certainly target the
latest libvirt, though.
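
To give an idea of what such a manual conversion involves, here is a rough
sketch of the relevant domain XML, for a newer libvirt that understands
PCIe controller models. The controller indexes, models, and address values
below are illustrative assumptions, not a tested configuration:

```xml
<!-- Sketch only: give a virtio NIC an explicit PCI address behind a
     PCIe root port. Controller models/indexes are assumptions. -->
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='pcie-root-port'/>
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <!-- bus 0x01 corresponds to the root port (controller index 1) -->
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
```

The key points are that each virtio device gets an explicit
`<address type='pci' .../>` element, and that enough PCIe ports exist to
plug the devices into (with spares left over if hotplug is wanted).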

Finally, FWIW, with a guest kernel of 4.6.4-301.fc24.aarch64, the
following qemu command line works for me (notice the use of PCIe), and my
network interface gets labeled enp0s1.

$QEMU -machine virt-2.6,accel=kvm -cpu host \
 -m 1024 -smp 1 -nographic \
 -bios /usr/share/AAVMF/AAVMF_CODE.fd \
 -device ioh3420,bus=pcie.0,id=pcie.1,port=1,chassis=1 \
 -device ioh3420,bus=pcie.0,id=pcie.2,port=2,chassis=2 \
 -device virtio-scsi-pci,disable-modern=off,disable-legacy=on,bus=pcie.1,addr=00.0,id=scsi0 \
 -drive file=/home/drjones/.local/libvirt/images/fedora.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0 \
 -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 \
 -netdev user,id=hostnet0 \
 -device virtio-net-pci,disable-modern=off,disable-legacy=on,bus=pcie.2,addr=00.0,netdev=hostnet0,id=net0

I prefer always using virtio-scsi for the disk, but a similar command
line can be used for a virtio-blk-pci disk.
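
For example, the -drive/-device pair for the SCSI disk above could be
replaced with something like the following. This is an untested sketch; it
assumes the same disable-modern/disable-legacy flags and reuses the root
port pcie.1 from the command line above:

```shell
 -drive file=/home/drjones/.local/libvirt/images/fedora.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 \
 -device virtio-blk-pci,disable-modern=off,disable-legacy=on,bus=pcie.1,addr=00.0,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
```

With virtio-blk the disk is a PCI device itself, so no scsi-hd device is
needed; the drive is attached directly to the virtio-blk-pci device.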

Thanks,
drew


