From: Kevin Zhao
Subject: Re: [Qemu-devel] Help: Does Qemu support virtio-pci for net-device and disk device?
Date: Thu, 18 Aug 2016 20:43:48 +0800

Hi Laine,
    Thanks :-) I also have a few questions below.

On 18 August 2016 at 01:00, Laine Stump <address@hidden> wrote:

> On 08/17/2016 12:13 PM, Andrew Jones wrote:
>
>> On Wed, Aug 17, 2016 at 08:08:11PM +0800, Kevin Zhao wrote:
>>
>>> Hi all,
>>>       Now I'm investigating net-device and disk hotplug for
>>> AArch64. For virtio, the default address is virtio-mmio. Since libvirt
>>> 1.3.5, users can explicitly specify the address type as pci, and libvirt
>>> will pass the virtio-pci parameters to Qemu.
>>>       Both my host and guest OS are Debian 8; the Qemu version is 2.6.0
>>> and the libvirt version is 1.3.5.
>>>       For the net-device, I change the address-type to pci, and libvirt
>>> passes the command below:
>>>       -device
>>> virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:0d:25:
>>> 25,bus=pci.2,addr=0x1
>>>
>>>       After booting, the eth0 device disappears (eth0 appears when the
>>> address is virtio-mmio),
>>> but I can find another net-device, enp2s1, though it can't get a DHCP
>>> lease. Running lspci shows: 02:01.0 Ethernet controller: Red Hat, Inc
>>> Virtio network device
>>> I'm not sure whether it worked.
>>>
>>>       For disk device, when I change the address-type to pci, the whole
>>> qemu command is:
>>> https://paste.fedoraproject.org/409553/, but the VM cannot boot
>>> successfully. Does Qemu not support virtio-pci disk devices on AArch64
>>> as it does on x86_64?
>>>       Thanks~ Since I am not very familiar with Qemu, I am really
>>> looking forward to your response.
>>>
>>> Best Regards,
>>> Kevin Zhao
>>>
>> libvirt 1.3.5 is a bit old. Later versions no longer unconditionally add
>> the i82801b11 bridge, which was necessary to use PCI devices with the PCIe
>> host bridge mach-virt has. IMO, libvirt and qemu still have a long way to
>> go in order to configure a base/standard mach-virt PCIe machine.
>>
>
> Well, you can do it now, but you have to manually assign the PCI addresses
> of devices (and if you want hotplug you need to live with Intel/TI-specific
> PCIe controllers).

OK. It seems that Qemu will drop PCI for mach-virt and turn to PCIe
in the future.
Do I need to do more for the Intel/TI-specific PCIe controllers? What do I
need to add in the guest XML, or anything more?
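
For reference, my understanding is that a hotpluggable PCIe slot can be
added to the guest XML as a pcie-root-port controller, which libvirt backs
with qemu's Intel ioh3420 part. A minimal sketch, assuming a libvirt that
recognizes the 'pcie-root-port' model; the index and slot values are only
illustrative:

```xml
<!-- Sketch: two hotpluggable PCIe slots via pcie-root-port controllers
     (qemu's Intel ioh3420). Index and slot values are illustrative. -->
<controller type='pci' index='1' model='pcie-root-port'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</controller>
```

Each pcie-root-port provides exactly one slot, so one such controller is
needed per device you want on a hotpluggable PCIe bus.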

>
>
>> 1) If we want to support both PCIe devices and PCI, then things are messy.
>>     Currently we propose dropping PCI support. mach-virt pretty much
>>     exclusively uses virtio, which can be set to PCIe mode (virtio-1.0)
>>
>
> I have a libvirt patch just about ACKed for pushing upstream that will
> automatically assign virtio-pci devices to a PCIe slot (if the qemu binary
> supports virtio-1.0):
>
> https://www.redhat.com/archives/libvir-list/2016-August/msg00852.html
>

What's the minimum version of Qemu that supports virtio-1.0? Does Qemu 2.6
work?
Also, I see your patch for automatically assigning virtio-pci devices to a
PCIe slot; after it is merged, I think things will become much easier.
For now I will manually change the slots and buses to PCIe. Because I am not
familiar with it, if it is convenient, could you give me a working XML
file in which a PCIe disk and a PCIe net device work for machine virt?
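
For what it's worth, here is a minimal sketch of what such device entries
might look like, assuming pcie-root-port controllers already exist at
indexes 1 and 2; the image path and network name are placeholders, not
taken from any real configuration:

```xml
<!-- Sketch: virtio disk and NIC placed on PCIe root ports (buses 1 and 2).
     Assumes pcie-root-port controllers with index 1 and 2 are defined;
     the image path and network name are placeholders. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</disk>
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</interface>
```

The bus attribute of each device's address must match the index of the
pcie-root-port controller it is plugged into.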

Thanks~

> Separate patches do the same for the e1000e emulated network device (which
> you probably don't care about) and the nec-usb-xhci (USB3) controller (more
> useful):
>
> https://www.redhat.com/archives/libvir-list/2016-August/msg00732.html
>
> Once these are in place, the only type of device of any consequence that I
> can see still having no PCIe alternative is audio. (Even though only the
> virgl video device is PCIe, libvirt has always assigned the primary video
> to slot 1 on pcie-root anyway; although you shouldn't put a legacy PCI
> device on a pcie-root-port or pcie-switch-downstream-port, it is acceptable
> to plug it directly into pcie-root, as long as you know you won't need to
> hotplug it.)
>
> 2) root complex ports, switches (upstream/downstream ports) are currently
>>     based on Intel parts. Marcel is thinking about creating generic
>> models.
>>
>
> I say this every time it comes up, so just to be consistent: +1 :-)
>
> 3) libvirt needs to learn how to plug everything together, in proper PCIe
>>     fashion, leaving holes for hotplug.
>>
>
> See above about virtio, although that doesn't cover the whole story. The
> other part (which I'm working on right now) is that libvirt needs to
> automatically add pcie-root-port, pcie-switch-upstream-port, and
> pcie-switch-downstream-port devices as necessary. With the patches I
> mentioned above, you still have to manually add enough pcie-*-port
> controllers to the config, and then libvirt will plug the PCIe devices into
> the right place. This is simple enough to do, but it does require
> intervention.
>
> As far as leaving holes for hotplug, there's actually still a bit of an
> open question there - with machinetypes that use only legacy-PCI, *all*
> slots are hotpluggable, and they're added 31 at a time, so there was never
> any question about which slots were hotpluggable, and it would be very rare
> to end up with a configuration that had 0 free slots available for hotplug
> (actually libvirt would always make sure there was at least one, but in
> practice there would be many more open slots). With PCIe-capable
> machinetypes that is changed, since the root complex (pcie-root) doesn't
> support hotplug, and new slots are added 1 at a time (pcie-*-port) rather
> than 31 at a time. This means you have to really go out of your way if you
> want open slots for hotplug (and even if you want devices in the machine at
> boot time to be hot-unpluggable).
>
> I'm still not sure just how far we need to go in this regard.  We've
> already decided that, unless manually set to an address on pcie-root by the
> user/management application, all PCI devices will be auto-assigned to a
> slot that supports hotplug. What I'm not sure about is whether we should
> always auto-add an extra pcie-*-port to be sure a device can be hotplugged,
> or if we should admit that 1 available slot isn't good enough for all
> situations, so we should instead just leave it up to the user/management to
> manually add extra ports if they think they'll want to hotplug something
> later.
>
>
> 4) Probably more... I forget all the different issues we discovered when
>>     we started playing with this a few months ago.
>>
>> The good news is that x86 folk want all the same things for the q35 model.
>> mach-virt enthusiasts like us get to ride along pretty much for free.
>>
>> So, using virtio-pci with mach-virt and libvirt isn't possible right now,
>> not without manual changes to the XML. It might be nice to document how to
>> manually convert a guest, so developers who want to use virtio-pci don't
>> have to abandon libvirt. I'd have to look into that, or ask one of our
>> libvirt friends to help. Certainly the instructions would be for latest
>> libvirt though.
>>
>> Finally, FWIW, with a guest kernel of 4.6.4-301.fc24.aarch64, the
>> following qemu command line works for me (notice the use of PCIe), and
>> my network interface gets labeled enp0s1.
>>
>> $QEMU -machine virt-2.6,accel=kvm -cpu host \
>>   -m 1024 -smp 1 -nographic \
>>   -bios /usr/share/AAVMF/AAVMF_CODE.fd \
>>   -device ioh3420,bus=pcie.0,id=pcie.1,port=1,chassis=1 \
>>   -device ioh3420,bus=pcie.0,id=pcie.2,port=2,chassis=2 \
>>   -device virtio-scsi-pci,disable-modern=off,disable-legacy=on,bus=pcie.1,addr=00.0,id=scsi0 \
>>   -drive file=/home/drjones/.local/libvirt/images/fedora.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0 \
>>   -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 \
>>   -netdev user,id=hostnet0 \
>>   -device virtio-net-pci,disable-modern=off,disable-legacy=on,bus=pcie.2,addr=00.0,netdev=hostnet0,id=net0
>>
>> I prefer always using virtio-scsi for the disk, but a similar command
>> line can be used for a virtio-blk-pci disk.
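
A sketch of that virtio-blk-pci variant, keeping the same root-port layout
as the command above; the image path is a placeholder, and the id names are
illustrative:

```sh
# Sketch: same machine, but the virtio-scsi controller + scsi-hd pair is
# replaced by a single virtio-blk-pci device on the first root port.
# The image path is a placeholder.
$QEMU -machine virt-2.6,accel=kvm -cpu host \
  -m 1024 -smp 1 -nographic \
  -bios /usr/share/AAVMF/AAVMF_CODE.fd \
  -device ioh3420,bus=pcie.0,id=pcie.1,port=1,chassis=1 \
  -device ioh3420,bus=pcie.0,id=pcie.2,port=2,chassis=2 \
  -drive file=/path/to/guest.qcow2,format=qcow2,if=none,id=drive-blk0 \
  -device virtio-blk-pci,disable-modern=off,disable-legacy=on,bus=pcie.1,addr=00.0,drive=drive-blk0,id=blk0,bootindex=1 \
  -netdev user,id=hostnet0 \
  -device virtio-net-pci,disable-modern=off,disable-legacy=on,bus=pcie.2,addr=00.0,netdev=hostnet0,id=net0
```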
>>
>> Thanks,
>> drew
>>
>
>
>

