
Re: [Qemu-devel] [PATCH] hw/pci: disable pci-bridge's shpc by default


From: Andrew Jones
Subject: Re: [Qemu-devel] [PATCH] hw/pci: disable pci-bridge's shpc by default
Date: Fri, 18 Nov 2016 16:52:01 +0100
User-agent: Mutt/1.6.0.1 (2016-04-01)

On Wed, Nov 16, 2016 at 07:05:25PM +0200, Marcel Apfelbaum wrote:
> On 11/16/2016 06:44 PM, Andrew Jones wrote:
> > On Sat, Nov 05, 2016 at 06:46:34PM +0200, Marcel Apfelbaum wrote:
> > > On 11/03/2016 09:40 PM, Michael S. Tsirkin wrote:
> > > > On Thu, Nov 03, 2016 at 01:05:44PM +0200, Marcel Apfelbaum wrote:
> > > > > On 11/03/2016 06:18 AM, Michael S. Tsirkin wrote:
> > > > > > On Wed, Nov 02, 2016 at 05:16:42PM +0200, Marcel Apfelbaum wrote:
> > > > > > > The shpc component is optional when ACPI hotplug is used
> > > > > > > for hot-plugging PCI devices into a PCI-PCI bridge.
> > > > > > > Disabling shpc by default will make slot 0 usable at boot time.
> > > > > 
> > > > > Hi Michael
> > > > > 
> > > > > > 
> > > > > > at the cost of breaking all hotplug for all non-acpi users.
> > > > > > 
> > > > > 
> > > > > Do we have a non-acpi user that is able to use the shpc component 
> > > > > as-is today?
> > > > 
> > > > power and some arm systems I guess?
> > > > 
> > > 
> > > Adding Andrew, maybe he can give us an answer.
> > 
> > Not really :-) My lack of PCI knowledge makes that difficult. I'd be happy
> > to help with an experiment though. Can you give me command line arguments,
> > qmp commands, etc. that I should use to try it out? I imagine I should
> > just boot an ARM guest using DT (instead of ACPI) and then attempt to
> > hotplug a PCI device. I'm not sure, however, what, if any, special
> > configuration I need in order to ensure I'm testing what you're
> > interested in.
> > 
> 
> Hi Drew,
> 
> 
> Just run QEMU with '-device pci-bridge,chassis_nr=1,id=bridge1 -monitor stdio'
> with an ARM guest using DT and wait until the guest finishes booting.
> 
> Then run at hmp:
> device_add virtio-net-pci,bus=bridge1,id=net2
> 
> Next run lspci in the guest to see the new device.

Thanks for the instructions, Marcel. Here are the results:

 $QEMU -machine virt,accel=$ACCEL -cpu $CPU -nographic -m 4096 -smp 8 \
       -bios /usr/share/AAVMF/AAVMF_CODE.fd \
       -device pci-bridge,chassis_nr=1,id=bridge1 \
       -drive file=$FEDORA_IMG,if=none,id=dr0,format=qcow2 \
       -device virtio-blk-pci,bus=bridge1,addr=01,drive=dr0,id=disk0 \
       -netdev user,id=hostnet0 \
       -device virtio-net-pci,bus=bridge1,addr=02,netdev=hostnet0,id=net0

 # lspci
 00:00.0 Host bridge: Red Hat, Inc. Device 0008
 00:01.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
 01:01.0 SCSI storage controller: Red Hat, Inc Virtio block device
 01:02.0 Ethernet controller: Red Hat, Inc Virtio network device

 (qemu) device_add virtio-net-pci,bus=bridge1,id=net2
 Unsupported PCI slot 0 for standard hotplug controller. Valid slots are
 between 1 and 31.

(Tried again giving addr=03)

 (qemu) device_add virtio-net-pci,bus=bridge1,id=net2,addr=03

(Seemed to work, but...)

 # lspci
 00:00.0 Host bridge: Red Hat, Inc. Device 0008
 00:01.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
 01:01.0 SCSI storage controller: Red Hat, Inc Virtio block device
 01:02.0 Ethernet controller: Red Hat, Inc Virtio network device

(Doesn't show up in lspci, so I guess it doesn't work.)
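
(One way to dig a bit deeper, assuming the guest kernel provides the SHPC
hotplug driver as a module; these are generic Linux PCI interfaces, so take
this as a sketch rather than something I actually ran:)

 # modprobe shpchp                  # load the SHPC hotplug driver, if it's built as a module
 # echo 1 > /sys/bus/pci/rescan     # force a bus rescan; finds the device even without a hotplug event
 # lspci                            # check whether the new virtio-net device shows up now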

> 
> 
> BTW, will an ARM guest run 'fast' enough to be usable on an x86 machine?
> If yes, any pointers on how to create such a guest?

You can run AArch64 guests on x86 machines. It's not super fast though...
Certainly I wouldn't want to create my guest image using TCG. So, assuming
you acquire an image somewhere (or create it on a real ARM machine), you can
use the above command line; just change

ACCEL=kvm CPU=host to ACCEL=tcg CPU=cortex-a57
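
For concreteness, the full command with that substitution would look roughly
like this (same $QEMU and $FEDORA_IMG placeholders as above; just a sketch,
adjust paths to your setup):

 $QEMU -machine virt,accel=tcg -cpu cortex-a57 -nographic -m 4096 -smp 8 \
       -bios /usr/share/AAVMF/AAVMF_CODE.fd \
       -device pci-bridge,chassis_nr=1,id=bridge1 \
       -drive file=$FEDORA_IMG,if=none,id=dr0,format=qcow2 \
       -device virtio-blk-pci,bus=bridge1,addr=01,drive=dr0,id=disk0 \
       -netdev user,id=hostnet0 \
       -device virtio-net-pci,bus=bridge1,addr=02,netdev=hostnet0,id=net0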

Thanks,
drew


