Re: [Qemu-devel] GSoC 2011: S3 Trio, AHCI


From: Paul Brook
Subject: Re: [Qemu-devel] GSoC 2011: S3 Trio, AHCI
Date: Wed, 6 Apr 2011 23:54:47 +0100
User-agent: KMail/1.13.5 (Linux/2.6.38-2-amd64; KDE/4.4.5; x86_64; ; )

> > Last year, I was also interested in working on S3 Trio emulation. This
> > year, the same idea is on the ideas list. The hardware is pretty
> > thoroughly documented through source code and textual documentation, and
> > I'm already familiar with adding PCI devices to Qemu, so I do see a
> > rough outline of how I would implement it.
> > 
> > However, last year, Paul Brook commented [1] that he wasn't convinced
> > about the usefulness of emulating an S3 Trio or Virge card, because of
> > performance reasons. He suggested that accelerating the 2D engine would
> > be tricky because the framebuffer is exposed to the guest. This might be
> > just me not fully understanding his point, but isn't this also the case
> > with the Cirrus Logic GD5446 card?
> > 
> > He also suggested paravirtualization for 3D acceleration. Do you think
> > it would make a good summer project?
> 
> I can't comment on these issues, CC'ing Paul, Anthony and Stefan.

My understanding is that Cirrus Logic cards also have 2D acceleration.  We 
implement this in qemu, but not in a way that's likely to be fast.  I don't 
really know either card in detail, but they're both of a similar age, so I'd 
expect the functionality to be fairly similar.

The 2D engines you're talking about are of questionable benefit.  IIUC they're 
basically a memcpy engine with some weird bitmasking operations that line up 
with the Windows 3.x GDI raster ops.  While accelerating this may have made 
sense on a 386, it's not worth the effort on modern CPUs.  The latency and 
overhead of setting up and synchronising with the async blit engine is greater 
than the cost of just doing it in software.  In practice modern desktop 
environments just use the 3D engine.
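
To make that concrete, a GDI-style ROP blit reduces to a loop of bitwise 
operations.  A minimal sketch in C, assuming 32bpp packed pixels (the names 
are illustrative, not actual qemu code):

#include <stdint.h>
#include <stddef.h>

/* dst op= src over a width x height rectangle; SRCAND (dst &= src) is
 * shown, other GDI raster ops just swap the bitwise expression.
 * Pitches are in bytes. */
static void rop_blit_srcand(uint32_t *dst, const uint32_t *src,
                            int width, int height,
                            size_t dst_pitch, size_t src_pitch)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            dst[x] &= src[x];       /* the whole "2D engine" operation */
        }
        dst += dst_pitch / sizeof(*dst);
        src += src_pitch / sizeof(*src);
    }
}

On a modern CPU that loop is memory-bound, so the handful of MMIO writes 
needed to program a hardware blitter, plus the poll or interrupt to find out 
when it finished, can easily cost more than just running the loop.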

IMO emulating useful 'real' 3D hardware is not feasible.  In theory you could 
emulate an old card, but these are also of limited practical benefit.  For 
the S3 cards the 3D engine is so crippled that even when new it wasn't worth 
using.  You could maybe implement an old fixed-function card, e.g. an i810 or 
3dfx, but drivers for these are getting hard to come by, and the functionality 
is still limited.  You basically get raster offloading, and everything else is 
done in software.  The emulation overhead may be greater than the useful 
offloaded work.
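
To illustrate that division of labour, here's a rough sketch of what "raster 
offloading" amounts to.  The types and the FIFO call are hypothetical, not 
the real i810 or 3dfx programming model:

#include <stdint.h>

struct vec3 { float x, y, z; };
struct screen_vertex { float sx, sy, z; uint32_t argb; };

/* Hypothetical card interface: accepts screen-space triangles only. */
extern void hw_fifo_push_triangle(const struct screen_vertex tri[3]);

/* Transform (and, in a real driver, lighting) stays on the CPU. */
static struct screen_vertex transform(const struct vec3 *v,
                                      const float mvp[16]) /* column-major */
{
    struct screen_vertex out;
    float w = mvp[3] * v->x + mvp[7] * v->y + mvp[11] * v->z + mvp[15];
    out.sx = (mvp[0] * v->x + mvp[4] * v->y + mvp[8]  * v->z + mvp[12]) / w;
    out.sy = (mvp[1] * v->x + mvp[5] * v->y + mvp[9]  * v->z + mvp[13]) / w;
    out.z  = (mvp[2] * v->x + mvp[6] * v->y + mvp[10] * v->z + mvp[14]) / w;
    out.argb = 0xffffffffu;     /* lighting elided */
    return out;
}

Everything above the FIFO push runs on the (emulated) guest CPU either way; 
only the rasterisation moves to the host, and the cost of trapping and 
decoding each FIFO write can swamp that.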

For good 3D support you're looking at something shader-based.  Emulating real 
hardware is not going to happen: with real hardware, the interface qemu needs 
to emulate is directly tied to the implementation details of that particular 
chipset.  The guest driver generally relies on intimate knowledge of those 
implementation details (e.g. vram layout, shader ISA).  Different 
implementations may provide the same high-level functionality, but the 
low-level implementations are very different.  Reconstructing high-level 
operations from the low-level stream is extremely hard, probably harder than 
the main CPU emulation that qemu does.

IMO a good rule of thumb is that the output of the render pipeline should not 
be guest visible.  Anything where the guest can observe/manipulate the output 
or intermediate results makes it very hard to isolate the guest from the 
implementation details (i.e. whatever hardware acceleration the host 
provides).
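
In interface terms the rule of thumb looks something like this.  A 
hypothetical sketch, not any existing driver ABI:

#include <stdint.h>
#include <stddef.h>

/* The guest sees only an opaque handle, never a vram pointer, so the
 * host is free to keep the surface in a host GL texture or wherever. */
typedef uint32_t pv_surface_t;

pv_surface_t pv_surface_create(int width, int height);
void pv_surface_draw(pv_surface_t dst, const void *cmds, size_t len);
void pv_surface_present(pv_surface_t src);

/* The problematic alternative: once the guest can map the pixels, the
 * host has to materialise every intermediate result in guest-visible
 * memory, and most acceleration strategies fall apart. */
void *pv_surface_map(pv_surface_t s);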

There are already a handful of different paravirtual graphics drivers, of 
varying quality and openness.  This includes:

- Several OpenGL passthrough drivers.  These are effectively just 
re-implementing GLX, often badly.  I suspect that, given a decent virtual 
network, remote X (including 3D via GLX) already works pretty well.

- SPICE. IIUC this is an ugly hack that maps directly onto legacy Windows/GDI 
operations.  I'm not aware of any substantive plan for making this work well 
in other environments (using the subset that's basically a dumb framebuffer 
doesn't count), or for doing 3D.

- Whatever VMware uses.

- Whatever VirtualBox uses.

- At least two Gallium3D-based projects.  I think this includes Xen, and 
possibly VirtualBox.  Given that the whole point of Gallium3D is to provide a 
common abstraction layer between the application API and the hardware, this 
would be my choice; a rough sketch of the level it operates at follows below.
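
For what it's worth, the abstraction Gallium3D works at looks roughly like 
this; the struct below mimics the shape of its context interface but is not 
the real Gallium API:

#include <stddef.h>

struct pv_shader;   /* compiled from a portable IR, not a native shader ISA */
struct pv_buffer;

/* API-neutral state objects plus a draw call; this is the sort of
 * command set a paravirtual transport would serialise to the host. */
struct pv_pipe {
    struct pv_shader *(*create_vs)(struct pv_pipe *p,
                                   const void *ir, size_t ir_len);
    void (*bind_vs)(struct pv_pipe *p, struct pv_shader *vs);
    void (*set_vertex_buffer)(struct pv_pipe *p, struct pv_buffer *vb);
    void (*draw)(struct pv_pipe *p, unsigned start, unsigned count);
};

Because the IR and state objects are hardware-neutral, the host side can 
lower them onto whatever driver stack it has, which is exactly the isolation 
argued for above.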

Paul


