From: John Gilmore
Subject: Re: [Gnash-dev] Flash HD (H.264) video decoding acceleration
Date: Wed, 23 Sep 2009 20:45:21 -0700

> I am not sure I fully understand what you mean, but I will think
> about it again tomorrow or tonight.  Isn't Gnash/Flash limited to VP6
> and H.264?  How would MPEG-2 MoComp/iDCT support help Gnash?

All existing video codecs (except Dirac) work basically the same way.
They use the same building blocks inside.  VP6 and H.264 both do
motion compensation and an inverse DCT (iDCT).  See:

  http://www.dspdesignline.com/211100053?printableArticle=true
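
To make that concrete, here is a rough sketch of the decode loop they
all share.  The helper functions are hypothetical stand-ins for the
codec-specific pieces; the loop itself is the common structure that
hardware MoComp/iDCT blocks accelerate:

  /* Sketch: the block-based decode loop common to MPEG-2, VP6, and
   * H.264.  Helpers are hypothetical; only the shape matters here. */
  #include <stdint.h>

  typedef struct { int x, y; } MotionVector;
  typedef struct { uint8_t *pixels; int stride; int num_blocks; } Frame;

  /* Codec-specific: entropy-decode one block's coefficients and MV. */
  void parse_block(int b, int16_t coeffs[64], MotionVector *mv);
  /* Shared building block: inverse DCT turns coefficients into residual. */
  void idct_8x8(int16_t coeffs[64]);
  /* Shared building block: copy the predicted block from the reference. */
  void motion_compensate(Frame *out, const Frame *ref, int b,
                         const MotionVector *mv);
  /* Reconstruct: prediction plus residual. */
  void add_residual(Frame *out, int b, const int16_t coeffs[64]);

  void decode_frame(Frame *out, const Frame *ref)
  {
      for (int b = 0; b < out->num_blocks; b++) {
          int16_t coeffs[64];
          MotionVector mv;
          parse_block(b, coeffs, &mv);         /* codec-specific */
          motion_compensate(out, ref, b, &mv); /* hardware MoComp hook */
          idct_8x8(coeffs);                    /* hardware iDCT hook */
          add_residual(out, b, coeffs);        /* prediction + residual */
      }
  }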

> I had asked some Intel people about other formats.  The main reason
> was that only broadly accepted, standardized formats are available
> in silicon.  Even VP6 did not meet their criteria.  Besides, the
> people who designed the API worked with existing HW implementations,
> not with hypothetical future implementations.

This is another red flag.

It's typical of the "PC industry" to keep designing products that look
backwards, not forwards.  Then they are surprised (!) when after half
a generation they need yet another new interface.  Obvious examples are
the buses that only supported 640K of RAM (PC-AT), the cards capped at
4GB of flash (SD), the 8-character file names; the list goes on and on
and on and on and on.  They create self-fulfilling prophecies of
obsolescence.  No wonder their hardware is full of backward-looking
compatibility kludges.

Let me guess: nobody is ever going to invent a codec again.  And no
codec will ever become popular except the ones that Intel(TM) chips
implement in 2009.  And that's why this API is not extensible.  Right?
Right!

If the VA API isn't designed to work with the next generation of video
hardware, what's the point of rewriting all our software to use it?  A
well designed protocol would work with the next TWO or THREE
generations of hardware.  The Linux community is still working with
clean APIs that were designed in the 1970s (open/close/read/write,
fork/exec, etc.) and the 1980s (socket/connect/bind).  TCP/IP was also
designed in the 1970s, as was Ethernet.  *IT STILL WORKS!* All of
these were designed outside the shortsighted "PC industry".  Learn
from real standards with real longevity, not from the people at Intel
who can't see beyond their own noses.

If the free community can't implement chips that accelerate OUR OWN
protocols and standards, and then call them through Intel's API, why
should we bother to use this API rather than making our own?
Designing chips isn't rocket science.  High school students do it.
Motherboards (at least AMD's) come with accelerator sockets.  And
with programmable rather than hardcoded accelerators, which are an
obvious trend even today, anyone could write microcode to accelerate
any video format.  "Oh, but there's no way to tell the API that we
accelerated that -- so let's not bother."  Wrong.

The VA API makes callers pick a "profile", which is what video format
they're working with, and an "entrypoint", which is how much of the
work will get done by software versus hardware.  It should be possible
to tell the VA API that you're using *ANY* video format.  These
formats should be specified by character strings, not by short binary
numbers.  If there's no driver for that video format, the API already
has clean ways to tell you there's no driver.  What it doesn't have
is a clean way to add a driver for a new format -- nor a way for an
application that knows the name of the video format it wants to play
to figure out the short binary number for that format.  Let me guess:
every application will need to carry a stupid little table that maps
video format names to VA API's stupid little numbers?
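
Concretely, here is roughly what the enum forces on every caller
today.  This sketch is my own illustration against libva (display
setup and error handling omitted), not code from any real player:

  #include <va/va.h>
  #include <string.h>

  /* The stupid little table, one private copy per application. */
  static const struct { const char *name; VAProfile profile; } format_table[] = {
      { "mpeg2", VAProfileMPEG2Main },
      { "h264",  VAProfileH264High  },
      /* "vp6": no enum value exists, so the format can't even be named. */
  };

  /* Map a format name to VA API's short binary number, or -1. */
  static VAProfile lookup_profile(const char *name)
  {
      for (size_t i = 0; i < sizeof format_table / sizeof format_table[0]; i++)
          if (strcmp(format_table[i].name, name) == 0)
              return format_table[i].profile;
      return (VAProfile)-1;
  }

  /* Ask the driver whether it actually supports that profile. */
  static int profile_supported(VADisplay dpy, VAProfile wanted)
  {
      VAProfile list[vaMaxNumProfiles(dpy)];
      int i, n = 0;
      if (vaQueryConfigProfiles(dpy, list, &n) != VA_STATUS_SUCCESS)
          return 0;
      for (i = 0; i < n; i++)
          if (list[i] == wanted)
              return 1;
      return 0;
  }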

I would even make a profile whose arguments are "I have this video
file and here's the first 8 kbytes of it -- please tell me if you can
play it, and if so, set up the right profile."  Without knowing the
NAME of the format!  Most applications, like gnash, have no control
over the format of the video they'll be asked to play.
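
In rough C, the call I'm asking for would look something like this.
To be clear: this is hypothetical, it exists nowhere in libva, and
the name is mine:

  #include <stddef.h>
  #include <va/va.h>

  /* HYPOTHETICAL -- probe by content.  Hand the driver the head of
   * the stream and let IT identify the format and set up a config,
   * instead of making the caller already know a profile number. */
  VAStatus vaProbeAndCreateConfig(VADisplay dpy,
                                  const void *head,   /* first 8 kbytes */
                                  size_t head_len,
                                  VAConfigID *config  /* out, on success */);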

So fix that.  In the API.  They've asked for community input; they
want it to evolve.

The same goes for "entrypoints".  A "bitstream" entrypoint (feed the
whole compressed stream to the hardware) should be standardized.  The
other entrypoint strings should be specific to the codecs involved,
though "motion compensation" is clearly one of them, and "inverse
discrete cosine transform" is another.

Clue, meet Intel.  Intel, meet clue.  Hulk smash clue into Intel head.
Run, clue, run!

        John



