
Re: [Qemu-devel] [RFC] qemu VGA endian swap low level drawing changes


From: Benjamin Herrenschmidt
Subject: Re: [Qemu-devel] [RFC] qemu VGA endian swap low level drawing changes
Date: Tue, 17 Jun 2014 20:57:46 +1000

On Tue, 2014-06-17 at 11:19 +0100, Peter Maydell wrote:
> On 17 June 2014 11:09, Benjamin Herrenschmidt <address@hidden> wrote:
> > On Tue, 2014-06-17 at 12:00 +0200, Greg Kurz wrote:
> >> There has been a discussion already about virtio endianness: relying on
> >> a guest system wide setting such as LPCR_ILE has been strongly rejected
> >> at the time... The consensus for virtio is "device endianness is the
> >> endianness of the CPU that does the reset" (hence MSR_LE for PPC).
> >
> > How on earth did anybody reach such a conclusion? Of all the possible
> > options I can think of, this is the one that makes the *least* sense!
> 
> Well, the right conclusion is "virtio should have specified
> endianness sanely, ie to be independent of the CPU". However
> we can't rewind time to make that decision correctly, so this
> is the best we can usefully do without breaking existing
> working guests.

Agreed about original breakage.

> It's absolutely a virtio-specific thing, though, given that
> virtio has the weird "endianness of the guest" semantics in
> it, so it's not a good model for anything else.
> 
> My personal opinion here is that device models should just
> have a fixed byte order, and the guest should deal with it
> (except in the cases where we're modelling real hardware
> which has a real config register for flipping byte order,
> in which case the answer is "work like that hardware").
> So I'm still really dubious about adding endian swapping to
> the VGA model at all. Why can't you just have the guest
> do the right thing?

I absolutely agree with you on the fixed byte order model.

Sadly, graphics carries a very long legacy of brokenness in that area
that we really can't fix easily.

X pretty much assumes the native byte order at compile time and expresses
color component locations as masks and shifts within N-bit "words" sized
by the pixel BPP. There are ways to play with the masks and shifts to
make 32bpp work, but 15/16bpp is always busted once the green component
gets split across the two bytes; X really can't deal with it.

But that's just the tip of the iceberg. Those assumptions about pixel
formats percolate all the way through the graphics stack; it's a real
mess. In GL, for example, an ARGB format is not A,R,G,B in memory in
that order, but "ARGB" from MSB to LSB within the smallest word that
can encompass the pixel (funnily enough, I hear DirectX got that
right!), and layers of code exist out there that just can't deal with
"the other way".

This is why graphics cards have historically carried all sorts of weird
byte-swapping hardware, more or less working, trying to solve this at
the HW level, because the SW is basically a trainwreck :-)

Now, Egbert is still trying to make base X and Qt5 at least somewhat
work with the reversed order for 32bpp, but that's mostly an academic
exercise at this stage.

So yes, you are absolutely right for the general case. But graphics
sadly has to be the exception to the rule.

Cheers,
Ben.

> thanks
> -- PMM




