Re: [Qemu-devel] [PATCH] hw/pci: disable pci-bridge's shpc by default


From: Marcel Apfelbaum
Subject: Re: [Qemu-devel] [PATCH] hw/pci: disable pci-bridge's shpc by default
Date: Tue, 22 Nov 2016 19:26:32 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.1.1

On 11/18/2016 05:52 PM, Andrew Jones wrote:
On Wed, Nov 16, 2016 at 07:05:25PM +0200, Marcel Apfelbaum wrote:
On 11/16/2016 06:44 PM, Andrew Jones wrote:
On Sat, Nov 05, 2016 at 06:46:34PM +0200, Marcel Apfelbaum wrote:
On 11/03/2016 09:40 PM, Michael S. Tsirkin wrote:
On Thu, Nov 03, 2016 at 01:05:44PM +0200, Marcel Apfelbaum wrote:
On 11/03/2016 06:18 AM, Michael S. Tsirkin wrote:
On Wed, Nov 02, 2016 at 05:16:42PM +0200, Marcel Apfelbaum wrote:
The shpc component is optional while ACPI hotplug is used
for hot-plugging PCI devices into a PCI-PCI bridge.
Disabling shpc by default will make slot 0 usable at boot time.
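
(For context: shpc can already be turned off per bridge via the existing
"shpc" property of pci-bridge; the patch only changes the default. A minimal
sketch of opting out explicitly today:

 -device pci-bridge,chassis_nr=1,id=bridge1,shpc=off

With shpc disabled, slot 0 of the bridge is free for a coldplugged device.)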

Hi Michael


at the cost of breaking all hotplug for all non-acpi users.


Do we have a non-acpi user that is able to use the shpc component as-is today?

power and some arm systems I guess?


Adding Andrew; maybe he can give us an answer.

Not really :-) My lack of PCI knowledge makes that difficult. I'd be happy
to help with an experiment though. Can you give me command line arguments,
qmp commands, etc. that I should use to try it out? I imagine I should
just boot an ARM guest using DT (instead of ACPI) and then attempt to
hotplug a PCI device. I'm not sure, however, what, if any, special
configuration I need in order to ensure I'm testing what you're
interested in.


Hi Drew,


Just run QEMU with '-device pci-bridge,chassis_nr=1,id=bridge1 -monitor stdio'
with an ARM guest using DT and wait until the guest finishes booting.

Then run at the HMP prompt:
device_add virtio-net-pci,bus=bridge1,id=net2

Next run lspci in the guest to see the new device.
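
The same hotplug can be driven over QMP; a sketch, assuming a QMP channel
is set up (e.g. with -qmp unix:/tmp/qmp.sock,server,nowait):

 { "execute": "device_add",
   "arguments": { "driver": "virtio-net-pci", "bus": "bridge1", "id": "net2" } }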

Thanks for the instructions, Marcel. Here are the results:

 $QEMU -machine virt,accel=$ACCEL -cpu $CPU -nographic -m 4096 -smp 8 \
       -bios /usr/share/AAVMF/AAVMF_CODE.fd \
       -device pci-bridge,chassis_nr=1,id=bridge1 \
       -drive file=$FEDORA_IMG,if=none,id=dr0,format=qcow2 \
       -device virtio-blk-pci,bus=bridge1,addr=01,drive=dr0,id=disk0 \
       -netdev user,id=hostnet0 \
       -device virtio-net-pci,bus=bridge1,addr=02,netdev=hostnet0,id=net0

 # lspci
 00:00.0 Host bridge: Red Hat, Inc. Device 0008
 00:01.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
 01:01.0 SCSI storage controller: Red Hat, Inc Virtio block device
 01:02.0 Ethernet controller: Red Hat, Inc Virtio network device

 (qemu) device_add virtio-net-pci,bus=bridge1,id=net2
 Unsupported PCI slot 0 for standard hotplug controller. Valid slots are
 between 1 and 31.

(Tried again, giving addr=03)

 (qemu) device_add virtio-net-pci,bus=bridge1,id=net2,addr=03

(Seemed to work, but...)

 # lspci
 00:00.0 Host bridge: Red Hat, Inc. Device 0008
 00:01.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
 01:01.0 SCSI storage controller: Red Hat, Inc Virtio block device
 01:02.0 Ethernet controller: Red Hat, Inc Virtio network device

(Doesn't show up in lspci. So I guess it doesn't work.)


Hi Drew,
Thanks for confirming that it doesn't work.

Michael asked if we can check the same for powerpc before
disabling the shpc by default.

Adding David, Thomas, and Laurent; maybe they have time
to check it for powerpc.
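
A rough sketch of the same test on a pseries guest (qemu-system-ppc64 and
the $GUEST_IMG path are assumptions here, not something tested in this
thread):

 qemu-system-ppc64 -machine pseries -m 2048 -nographic \
       -device pci-bridge,chassis_nr=1,id=bridge1 \
       -drive file=$GUEST_IMG,if=none,id=dr0,format=qcow2 \
       -device virtio-blk-pci,bus=bridge1,addr=01,drive=dr0,id=disk0

then, from the monitor:

 (qemu) device_add virtio-net-pci,bus=bridge1,id=net2,addr=03

and lspci in the guest to see whether the device appears, as above.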

Your help would be very much appreciated.

Thanks,
Marcel



BTW, will an ARM guest run 'fast' enough to be usable on an x86 machine?
If yes, any pointers on how to create such a guest?

You can run AArch64 guests on x86 machines. It's not super fast though...
Certainly I wouldn't want to create my guest image using TCG. So, assuming
you acquire an image somewhere (or create it on a real machine), you
can use the above command line; just change

ACCEL=kvm CPU=host to ACCEL=tcg CPU=cortex-a57
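
Spelled out, that substitution applied to the command line above gives:

 $QEMU -machine virt,accel=tcg -cpu cortex-a57 -nographic -m 4096 -smp 8 \
       -bios /usr/share/AAVMF/AAVMF_CODE.fd \
       -device pci-bridge,chassis_nr=1,id=bridge1 \
       -drive file=$FEDORA_IMG,if=none,id=dr0,format=qcow2 \
       -device virtio-blk-pci,bus=bridge1,addr=01,drive=dr0,id=disk0 \
       -netdev user,id=hostnet0 \
       -device virtio-net-pci,bus=bridge1,addr=02,netdev=hostnet0,id=net0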

Thanks,
drew




