
Re: [Qemu-devel] selecting VIRTIO_INPUT and VIRTIO_VGA


From: Laszlo Ersek
Subject: Re: [Qemu-devel] selecting VIRTIO_INPUT and VIRTIO_VGA
Date: Sun, 26 Jul 2015 11:31:10 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.1.0

On 07/25/15 11:49, Gerd Hoffmann wrote:
>   Hi,
> 
>>> I agree. Also, as far as I understood Marc, his hope was that the fix to 
>>> halfway working VGA emulation would be virtio-gpu.
> 
> Note we have both virtio-vga and virtio-gpu-pci.  virtio-vga has vga
> compatibility built-in, otherwise the two are identical.  virtio-gpu-pci
> is enabled along with all other virtio drivers, so arm + aarch64 have
> that already.
> 
>> 2) Use the fact that there is actually hardly any legacy for ARM VMs,
>> and embrace paravirtualized devices entirely. We do it for disks,
>> network interfaces. Why not display? Why not input?
> 
> We have both now (qemu 2.4+, linux 4.1+ for input, linux 4.2+ for gpu).
> Works just fine on arm (tcg tested).  aarch64 not yet (with vanilla
> upstream linux kernel) due to lack of generic pci host support.
> 
>> Using VGA makes sense on x86 because this is a standard on that
>> platform. Every system has one. You can't expect the same thing on ARM
>> (evil persons would even say that you can't expect anything at all). So
>> let's take this opportunity to use the best tool for the job. Virtio
>> fits that bill pretty well apparently.
> 
> Big question is (a) whether we need a firmware framebuffer and (b) how
> to implement that best.
> 
> virtio-vga/virtio-gpu-pci in paravirt (native) mode requires the guest
> to explicitly request screen updates.  There is no dirty page tracking, and
> guest writes to memory do *not* magically appear on the screen.  I don't
> think implementing an EFI driver for that is going to fly.

The EFI_GRAPHICS_OUTPUT_PROTOCOL structure has a function pointer member
called Blt:

  Blt -- Software abstraction to draw on the video device’s frame
         buffer.
  [...]
         Blt a rectangle of pixels on the graphics screen. Blt stands
         for BLock Transfer.

And, one of the enumeration constants that are possible for the
EFI_GRAPHICS_OUTPUT_PROTOCOL.Mode->Info->PixelFormat field is:

  PixelBltOnly -- This mode does not support a physical frame buffer.

Therefore, strictly for working before ExitBootServices(), a UEFI_DRIVER
module could be implemented that exposes a "blit-only" interface. I have
never tested whether the higher level graphics stack in edk2 would work
with that; I guess it might. And, if we force all display accesses through
Blt(), then all the necessary virtio stuff could be done in there, I guess.
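
For illustration, here is a rough, untested sketch of what a caller would
have to do against such a blit-only GOP instance. FillRectangle() is made
up for this example; only the Blt() signature, EfiBltVideoFill and
PixelBltOnly come from the UEFI spec:

    #include <Uefi.h>
    #include <Protocol/GraphicsOutput.h>

    //
    // Fill a rectangle through Blt() only. With PixelBltOnly there is no
    // linear framebuffer to poke; a (hypothetical) virtio-gpu GOP driver
    // would translate this call into virtio transfer/flush commands.
    //
    STATIC
    EFI_STATUS
    FillRectangle (
      IN EFI_GRAPHICS_OUTPUT_PROTOCOL   *Gop,
      IN EFI_GRAPHICS_OUTPUT_BLT_PIXEL  *FillColor,
      IN UINTN                          DestX,
      IN UINTN                          DestY,
      IN UINTN                          Width,
      IN UINTN                          Height
      )
    {
      //
      // Gop->Mode->FrameBufferBase is meaningless in this mode, so don't
      // touch it; everything must go through the Blt() abstraction.
      //
      return Gop->Blt (
                    Gop,
                    FillColor,        // single pixel used as the fill color
                    EfiBltVideoFill,
                    0, 0,             // SourceX/SourceY: ignored for VideoFill
                    DestX, DestY,
                    Width, Height,
                    0                 // Delta: not used for VideoFill
                    );
    }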

The problem is however runtime OS support, after ExitBootServices().
Take for example Windows 8 or Linux (without specific video drivers) on
a UEFI system. The respective boot loader or stub (launched as a UEFI
application) is smart enough to save the framebuffer characteristics for
the OS, *if* there is a physical framebuffer, and then the OS can use a
generic "efifb" driver, directly accessing the video RAM. For Windows 8
and later, this was the only way to have graphics when booting on top of
OVMF, at least until Vadim Rozenfeld completed the QXL WDDM driver.
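
Just to show how little "efifb"-style access needs -- a rough sketch in
edk2-style C, assuming a 32 bits-per-pixel mode; PutPixelDirect() is a
made-up name, only FrameBufferBase, PixelsPerScanLine and PixelBltOnly
come from the UEFI spec:

    #include <Uefi.h>
    #include <Protocol/GraphicsOutput.h>

    //
    // Direct framebuffer access, only valid when a physical framebuffer
    // exists, i.e. when Info->PixelFormat != PixelBltOnly.
    //
    STATIC
    VOID
    PutPixelDirect (
      IN EFI_GRAPHICS_OUTPUT_PROTOCOL  *Gop,
      IN UINTN                         X,
      IN UINTN                         Y,
      IN UINT32                        Pixel
      )
    {
      EFI_GRAPHICS_OUTPUT_MODE_INFORMATION  *Info;
      UINT32                                *FrameBuffer;

      Info = Gop->Mode->Info;
      //
      // Write straight into video RAM -- this is all that a generic
      // framebuffer driver does after the boot loader has saved
      // FrameBufferBase and the mode geometry for the OS.
      //
      FrameBuffer = (UINT32 *)(UINTN)Gop->Mode->FrameBufferBase;
      FrameBuffer[Y * Info->PixelsPerScanLine + X] = Pixel;
    }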

In brief: PixelBltOnly would be *probably* okay until
ExitBootServices(), but without a physical frame buffer, UEFI operating
systems without native virtio-gpu-pci drivers could not display graphics.

(Recall Windows 7, and the VBE shim we came up with for it -- if the OS
doesn't have graphics after ExitBootServices(), either because the OS is
broken (Windows 7) or because the display is PixelBltOnly, then it can't
even be installed. You can select storage drivers mid-installation
(which runs after ExitBootServices()), but not video.)

> virtio-vga in vga-compat mode uses a framebuffer with the usual dirty
> tracking logic in pci bar 0 (similar to stdvga).  Which is exactly the
> thing causing the cache coherency issues on aarch64 if I understand
> things correctly.

Yes. :(

> Programming (modesetting) works without legacy vga io
> ports, you can use the mmio regs in pci bar 1 instead (applies to both
> virtio-vga and stdvga btw), and QemuVideoDxe actually uses the mmio bar.

True.

But, as a side point, let me talk a bit about the outb() function in
OvmfPkg/QemuVideoDxe/Driver.c. It (very correctly for a UEFI_DRIVER
module!) uses PciIo->Io.Write() to write to IO ports.
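
From memory, it looks roughly like this (paraphrased, not a verbatim copy
of Driver.c):

    STATIC
    VOID
    outb (
      IN QEMU_VIDEO_PRIVATE_DATA  *Private,
      IN UINTN                    Reg,
      IN UINT8                    Data
      )
    {
      //
      // EFI_PCI_IO_PASS_THROUGH_BAR means "Reg" is an absolute legacy IO
      // port number, not an offset into one of the device's BARs.
      //
      Private->PciIo->Io.Write (
                           Private->PciIo,
                           EfiPciIoWidthUint8,
                           EFI_PCI_IO_PASS_THROUGH_BAR,
                           Reg,
                           1,        // Count
                           &Data
                           );
    }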

Now, the PciIo protocol implementation is platform independent. In
practice it forwards IO space accesses to the EFI_PCI_ROOT_BRIDGE_IO
protocol. And *that* one is platform-dependent.

For x86 virtual machines, those accesses are turned into IO port
accesses. However, the EFI_PCI_ROOT_BRIDGE_IO implementation in
ArmVirtPkg/PciHostBridgeDxe/, which is built into AAVMF and runs on the
"virt" machtype, maps the IO space and the IO port accesses to a special
(fake) MMIO range of 64K "ports".
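
Schematically -- this is not the actual ArmVirtPkg code, just the idea;
RootBridgeIoWrite8() and mIoTranslationBase are made-up names -- the root
bridge driver's IO accessor boils down to:

    #include <Library/IoLib.h>

    STATIC UINT64 mIoTranslationBase; // base of the IO window, learned from the DTB

    STATIC
    VOID
    RootBridgeIoWrite8 (
      IN UINT64  Port,   // legacy "IO port" number, 0..0xFFFF
      IN UINT8   Data
      )
    {
      //
      // The "IO port" access becomes a plain MMIO write into the window
      // that QEMU maps at VIRT_PCIE_PIO.
      //
      MmioWrite8 ((UINTN)(mIoTranslationBase + Port), Data);
    }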

In QEMU this memory region corresponds to VIRT_PCIE_PIO, in
"hw/arm/virt.c". See create_pcie():

    hwaddr base_pio = vbi->memmap[VIRT_PCIE_PIO].base;

    ...

    /* Map IO port space */
    sysbus_mmio_map(SYS_BUS_DEVICE(dev), 2, base_pio);

This range is advertised in the DTB that QEMU exports to AAVMF, which is
how AAVMF knows how to do the translation.

I believe such an emulated IO space was necessary for most QEMU device
models in the first place (I guess quite a few of them must have a hard
IO space dependency). Now that it's there, we can drive it from AAVMF.
(Whether the IO space emulation is temporary or here to stay in QEMU, I
don't know.)

Anyhow, this wall of text is just to say: *if* QemuVideoDxe, built for
AAVMF, had to fall back to legacy VGA IO ports, for whatever reason, it
would be capable of that. The PciIo->Io.Write() accesses made in
QemuVideoDxe would be "diverted" by ArmVirtPkg/PciHostBridgeDxe to the
special MMIO range. (Are abstractions awesome or what?! :))

Thanks
Laszlo


