Re: [Qemu-devel] [PATCH 4/5] x86: Allow physical address bits to be set


From: Marcel Apfelbaum
Subject: Re: [Qemu-devel] [PATCH 4/5] x86: Allow physical address bits to be set
Date: Mon, 20 Jun 2016 14:13:17 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.5.0

On 06/20/2016 01:42 PM, Igor Mammedov wrote:
On Sun, 19 Jun 2016 19:13:17 +0300
Marcel Apfelbaum <address@hidden> wrote:

On 06/17/2016 07:07 PM, Laszlo Ersek wrote:
On 06/17/16 11:52, Igor Mammedov wrote:
On Fri, 17 Jun 2016 11:17:54 +0200
Gerd Hoffmann <address@hidden> wrote:

On Fr, 2016-06-17 at 10:43 +0200, Paolo Bonzini wrote:

On 17/06/2016 10:15, Dr. David Alan Gilbert wrote:
Larger is a problem if the guest tries to map something to a
high address that's not addressable.

Right.  It's not a problem for most emulated PCI devices (it
would be a problem for those that have large RAM BARs, but even
our emulated video cards do not have 64-bit RAM BARs, I think;

qxl can be configured to have one, try "-device
qxl-vga,vram64_size_mb=1024"

     2) While we have maxmem settings to tell us the top of VM
RAM, do we have anything that tells us the top of IO space?
What happens when we hotplug a PCI card?

(arch/x86/kernel/setup.c) but I agree that (2) is a blocker.

seabios maps stuff right above ram (possibly with a hole due to
alignment requirements).

ovmf maps stuff into a 32G-aligned 32G hole.  Which lands at 32G
and therefore is addressable with 36 bits, unless you have tons
of ram (> 30G) assigned to your guest.  A physical host machine
where you can plug in enough ram for such a configuration likely
has more than 36 physical address lines too ...
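Gerd's arithmetic above can be sketched numerically. This is a simplified model of a 32G-aligned, 32G-sized 64-bit MMIO hole, not OVMF's actual allocator; the helper name and alignment policy are assumptions for illustration only:

```python
GIB = 1 << 30

def mmio64_hole(ram_top, hole_size=32 * GIB):
    """Place a hole_size window at the first hole_size-aligned address
    at or above the top of RAM (simplified OVMF-style placement)."""
    base = (ram_top + hole_size - 1) // hole_size * hole_size
    top = base + hole_size
    # bits needed to address the last byte of the hole
    return base, top, (top - 1).bit_length()

# RAM top below 32 GiB: hole lands at 32 GiB, ends at 64 GiB -> 36 bits
print(mmio64_hole(8 * GIB))
# RAM top above 32 GiB (the "> 30G" case): hole moves to 64 GiB -> 37 bits
print(mmio64_hole(34 * GIB))
```

This shows why 36 physical address bits suffice until guest RAM pushes the hole past the 32 GiB boundary.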

qemu checks where the firmware mapped 64bit bars, then adds those
ranges to the root bus pci resources in the acpi tables
(see /proc/iomem).
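The bookkeeping described here can be sketched roughly as follows; the helper and the example ranges are made up for illustration, QEMU's real logic lives in its ACPI table generator:

```python
def crs_window(bar_ranges):
    """Collapse firmware-assigned 64-bit BAR ranges, given as
    (base, limit) pairs, into one covering window of the kind QEMU
    advertises in the root bus _CRS (simplified sketch)."""
    base = min(b for b, _ in bar_ranges)
    limit = max(l for _, l in bar_ranges)
    return base, limit

# Two hypothetical 1 GiB BARs mapped by firmware above 32 GiB:
ranges = [(0x8_0000_0000, 0x8_3FFF_FFFF), (0x8_4000_0000, 0x8_7FFF_FFFF)]
print([hex(x) for x in crs_window(ranges)])
```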

You don't know how the guest will assign PCI BAR addresses, and
as you said there's hotplug too.

Not sure whether qemu adds some extra space for hotplug to the
64bit hole, and if so how it calculates the size.  But the
guest os should stick to those ranges when configuring hotplugged
devices.
currently firmware would assign 64-bit BARs after
reserved-memory-end (not sure about ovmf though)

OVMF does the same as well. It makes sure that the 64-bit PCI MMIO
aperture is located above "etc/reserved-memory-end", if the latter
exists.

but QEMU on ACPI side will add 64-bit _CRS only
for firmware mapped devices (i.e. no space reserved for hotplug).
And if I recall correctly ovmf won't map BARs if it doesn't have
a driver for the device

Yes, that's correct, generally for all UEFI firmware.

More precisely, BARs will be allocated and programmed, but the MMIO
space decoding bit will not be set (permanently) in the device's
command register, if there is no matching driver in the firmware
(or in the device's own oprom).
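The command-register bit Laszlo refers to can be sketched with a couple of mask constants (these bit positions come from the PCI specification; the helper itself is just an illustration):

```python
# PCI command register (config space offset 0x04) bits, per the PCI spec:
PCI_COMMAND_IO = 0x0001      # bit 0: I/O space decode enable
PCI_COMMAND_MEMORY = 0x0002  # bit 1: memory space decode enable

def mmio_decoding_enabled(command):
    """True when the device decodes memory accesses to its BARs."""
    return bool(command & PCI_COMMAND_MEMORY)

# A BAR can be programmed while decoding stays off (no driver bound):
print(mmio_decoding_enabled(0x0000))  # False
print(mmio_decoding_enabled(0x0006))  # True (memory decode + bus master)
```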

so ACPI tables won't even have space for unmapped
64-bit BARs.

This used to be true, but that's not the case since
<https://github.com/tianocore/edk2/commit/8f35eb92c419>.

Namely, specifically for conforming to QEMU's ACPI generator, OVMF
*temporarily* enables, as a platform quirk, all PCI devices present
in the system, before triggering QEMU to generate the ACPI payload.

Thus, nowadays 64-bit BARs work fine with OVMF, both for
virtio-modern devices, and assigned physical devices. (This is very
easy to test, because, unlike SeaBIOS, the edk2 stuff built into
OVMF prefers to allocate 64-bit BARs outside of the 32-bit address
space.)

Devices behind PXBs are a different story, but Marcel's been looking
into that, see
<https://bugzilla.redhat.com/show_bug.cgi?id=1323976>.

There was another attempt to reserve more space in _CRS
    https://lists.nongnu.org/archive/html/qemu-devel/2016-05/msg00090.html

That's actually Marcel's first own patch set for addressing
RHBZ#1323976 that I mentioned above (see it linked in
<https://bugzilla.redhat.com/show_bug.cgi?id=1323976#c2>).

It might have wider effects, but it is entirely motivated, to my
knowledge, by PXB. If you don't have extra root bridges, and/or you
plug all your devices with 64-bit MMIO BARs into the
"main" (default) root bridge, then (I believe) that patch set is
not supposed to make any difference. (I could be wrong, it's been a
while since I looked at Marcel's work!)


Patches 3 and 4 are indeed for PXB only, but the patch 'pci: reserve
64 bit MMIO range for PCI hotplug' (see
https://lists.nongnu.org/archive/html/qemu-devel/2016-05/msg00091.html)
tries to reserve the [above_4g_mem_size, max_addressable_cpu_bits]
range for PCI hotplug.
it should be [reserved-memory-end, max_addressable_cpu_bits]


Right, thanks; the patch actually works as you pointed out.

Thanks,
Marcel
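The reservation being discussed, after Igor's correction, can be sketched like this; the helper name and example values are hypothetical, not the patch's actual code:

```python
def pci64_hotplug_window(reserved_memory_end, phys_bits):
    """Reserve [reserved-memory-end, 2^phys_bits) for the 64-bit BARs
    of hotplugged devices (sketch of the corrected range only)."""
    limit = 1 << phys_bits
    if reserved_memory_end >= limit:
        raise ValueError("no room above reserved-memory-end")
    return reserved_memory_end, limit - 1

# e.g. reserved-memory-end at 5 GiB on a CPU with 40 physical bits:
base, end = pci64_hotplug_window(5 << 30, 40)
print(hex(base), hex(end))
```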


The implementation is not good enough yet because the number of
addressable bits is hard-coded. However, we now have David's wrapper,
which I can use.
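One way to obtain the addressable-bits value on Linux is to parse the "address sizes" line of /proc/cpuinfo, which reports what CPUID leaf 0x80000008 EAX[7:0] returns. This is a hypothetical stand-in for the wrapper mentioned above, not its actual code:

```python
import re

def parse_phys_bits(cpuinfo_text, default=36):
    """Extract the physical address width from /proc/cpuinfo-style
    text; fall back to a conservative default when absent."""
    m = re.search(r"address sizes\s*:\s*(\d+)\s*bits physical", cpuinfo_text)
    return int(m.group(1)) if m else default

sample = "address sizes\t: 39 bits physical, 48 bits virtual\n"
print(parse_phys_bits(sample))  # 39
```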


Thanks,
Marcel


Thanks
Laszlo
