From: Laszlo Ersek
Subject: Re: [Qemu-devel] [PATCH v2 0/7] ramfb: simple boot framebuffer, no legacy vga
Date: Thu, 31 May 2018 11:11:51 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.7.0

On 05/31/18 10:43, Gerd Hoffmann wrote:
>   Hi,
> 
> Resuming an old discussion ...
> 
>>> From
>>> the guests point of view there is no difference between
>>>
>>>   (a) qemu -device virtio-ramfb, and
>>>   (b) qemu -device virtio-gpu-pci -device ramfb-testdev
>>>
>>> On the host side the difference is that (a) is a single QemuConsole
>>> which shows virtio-gpu-pci once activated and ramfb otherwise, and
>>> (b) is two QemuConsoles, so you can see both virtio-gpu-pci and ramfb
>>> side-by-side, for debugging purposes.
>>
>> Exactly this "multiple frontends, single backend" connection is the
>> problem. In UEFI, it is possible to establish a priority order between
>> drivers that are all capable of binding the same controller ("handle"),
>> but especially with ramfb + another (PCI) video frontend, it's the
>> "handles" that are different. The "priority mechanism" would have no
>> idea that the drivers cannot peacefully coexist, i.e. it's clueless
>> about the (host side only) competition.
> 
> Well, virtio-vga and qxl-vga are very similar.  They both are two-in-one
> devices, with legacy vga frontend and native (qxl/virtio) frontend
> sharing a single backend.  When the guest initializes the native
> frontend, the backend switches over from vga to native.

True -- the difference, however, is that the firmware doesn't even try to
drive the native QXL frontend; it's clueless about it.

IOW, in the (QXL, VGA) two-in-one device, the firmware only sees VGA,
because the firmware entirely (globally) ignores the QXL frontend; it
has no driver for native QXL.

We couldn't do that for virtio-gpu (we really wanted to drive it, for
aarch64's sake), so the (virtio-gpu, VGA) two-in-one device required
special hacks to prevent double-binding.

[snip]

>> I could imagine an OvmfPkg-specific PCI capability that said, "all PCI
>> drivers in OvmfPkg that could otherwise drive this device, ignore it --
>> another (platform) driver in OvmfPkg will pick it up instead".
> 
> pci capability for ramfb could be useful (also for linux).  I'll keep it
> in mind for now.

Please do. :)

When you brought up the PCI capability last time in this thread (and I
liked it), I realized that scanning for this new (likely "vendor")
capability would require me to code up the *third* PCI caplist scanning
loop in OVMF.

(Until that point we had implemented two such scans, one in the
virtio-1.0 driver, because virtio-1.0 uses vendor capabilities
liberally, and the other one in PciHotPlugInitDxe, which looks for the
PCI resource reservation hints on bridges, for hotplug purposes.)
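
(For context, such a scan is just the classic walk over the capability
list: the Capabilities Pointer lives at config space offset 0x34, each
capability carries an ID byte and a "next" pointer byte, and
vendor-specific capabilities use ID 0x09. A minimal, firmware-agnostic
sketch -- with a hypothetical ReadCfg8 callback standing in for the real
config space accessors -- would look roughly like this:)

  /*
   * Sketch of a generic PCI capability list walk; not OVMF code, just the
   * algorithm that each of the scanning loops mentioned above has to
   * reimplement.  "ReadCfg8" is a hypothetical callback supplied by the
   * caller for reading one byte of the device's PCI config space.  A real
   * scan would first check the Capability List bit in the Status register
   * (config offset 0x06, bit 4) before trusting offset 0x34.
   */
  #include <stdint.h>

  #define PCI_CAP_POINTER   0x34  /* config offset of the Capabilities Pointer */
  #define PCI_CAP_ID_VENDOR 0x09  /* vendor-specific capability ID             */

  typedef uint8_t (*READ_CFG8)(void *Dev, uint8_t Offset);

  /*
   * Return the config space offset of the first vendor-specific capability,
   * or 0 if none is found.
   */
  static uint8_t
  FindVendorCap (void *Dev, READ_CFG8 ReadCfg8)
  {
    uint8_t  Offset;
    unsigned Seen;

    Offset = ReadCfg8 (Dev, PCI_CAP_POINTER) & 0xFC;
    /* Bound the walk so that a malformed (looping) list can't hang us. */
    for (Seen = 0; Offset != 0 && Seen < 48; Seen++) {
      uint8_t CapId = ReadCfg8 (Dev, Offset);     /* byte 0: capability ID    */
      uint8_t Next  = ReadCfg8 (Dev, Offset + 1); /* byte 1: next cap pointer */

      if (CapId == PCI_CAP_ID_VENDOR) {
        return Offset;
      }
      Offset = Next & 0xFC;
    }
    return 0;
  }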

After I had added the second such scan (in PciHotPlugInitDxe), Jordan
suggested that we instead use a helper library, to save us the manual
fiddling with the capability headers and contents.

So, when you brought up PCI capabilities the last time in this thread,
the foreseeable "third scan" got stuck in my mind, and it ultimately
spurred me to write that helper library, originally suggested by Jordan.
It's been upstream for a week now, and both Virtio10Dxe and
PciHotPlugInitDxe have been converted to use it (commit range
4b8552d794e7..5685a243b6f8). (Thank you Ard again for the review.)

If you add the above-suggested

  "hands off for platform drivers' sake"

capability in QEMU, I think we'll be able to locate and parse it cleanly
in the OVMF PCI drivers that need to honor it.
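
To make that concrete: a vendor-specific capability starts with the
three-byte header mandated by the PCI spec (ID 0x09, next pointer,
length), and everything after the header is ours to define. Purely as an
illustration -- no layout has been agreed on -- the "hands off" hint
could be as small as one flags byte:

  #include <stdint.h>

  #pragma pack (1)
  typedef struct {
    uint8_t CapId;    /* 0x09: vendor-specific capability (PCI spec)      */
    uint8_t NextPtr;  /* config space offset of the next capability       */
    uint8_t Length;   /* total size of this capability, header included   */
    uint8_t Flags;    /* hypothetical payload: bit 0 = "a platform driver */
                      /* owns this device; generic PCI drivers in the     */
                      /* firmware should not bind it"                     */
  } HANDS_OFF_VENDOR_CAP;
  #pragma pack ()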

Thanks!
Laszlo


