[Gnash-commit] gnash ChangeLog doc/C/internals.xml
From: Tomas Groth
Subject: [Gnash-commit] gnash ChangeLog doc/C/internals.xml
Date: Mon, 14 Aug 2006 16:25:15 +0000
CVSROOT: /sources/gnash
Module name: gnash
Changes by: Tomas Groth <tgc> 06/08/14 16:25:15
Modified files:
. : ChangeLog
doc/C : internals.xml
Log message:
Added info about how soundhandlers works.
CVSWeb URLs:
http://cvs.savannah.gnu.org/viewcvs/gnash/ChangeLog?cvsroot=gnash&r1=1.637&r2=1.638
http://cvs.savannah.gnu.org/viewcvs/gnash/doc/C/internals.xml?cvsroot=gnash&r1=1.20&r2=1.21
Patches:
Index: ChangeLog
===================================================================
RCS file: /sources/gnash/gnash/ChangeLog,v
retrieving revision 1.637
retrieving revision 1.638
diff -u -b -r1.637 -r1.638
--- ChangeLog 13 Aug 2006 16:45:11 -0000 1.637
+++ ChangeLog 14 Aug 2006 16:25:14 -0000 1.638
@@ -1,3 +1,7 @@
+2006-08-14 Tomas Groth Christensen <address@hidden>
+
+ * doc/C/internals.xml: Added info about how the soundhandlers works.
+
2006-08-13 Sandro Santilli <address@hidden>
* server/edit_text_character_def.cpp, server/shape_character_def.cpp,
Index: doc/C/internals.xml
===================================================================
RCS file: /sources/gnash/gnash/doc/C/internals.xml,v
retrieving revision 1.20
retrieving revision 1.21
diff -u -b -r1.20 -r1.21
--- doc/C/internals.xml 21 Jun 2006 00:34:37 -0000 1.20
+++ doc/C/internals.xml 14 Aug 2006 16:25:15 -0000 1.21
@@ -1718,6 +1718,164 @@
</sect2>
+ <sect2 id="soundhandlers">
+ <title>Sound handling in Gnash</title>
+
+ <para>
+ When a SWF file being played in Gnash contains audio, Gnash uses its
+ sound handlers to play it. At the moment there are two sound handlers,
+ but more are likely to be added.
+ </para>
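+ <para>
+ As a rough illustration (the class and method names below are invented
+ for this sketch and are not Gnash's actual API), a sound handler can be
+ thought of as a small interface that each backend implements:
+ </para>
+ <programlisting>
```cpp
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical sketch of a sound-handler interface; the names are
// illustrative, not Gnash's real classes.
class sound_handler {
public:
    virtual ~sound_handler() {}
    // Store (possibly still undecoded) sound data, return a handle.
    virtual int create_sound(const std::vector<uint8_t>& data) = 0;
    virtual void play_sound(int handle) = 0;
    virtual void stop_sound(int handle) = 0;
};

// A trivial backend used only to show the shape of the interface.
class null_sound_handler : public sound_handler {
public:
    int create_sound(const std::vector<uint8_t>& data) override {
        int handle = next_handle_++;
        sounds_[handle] = data;
        return handle;
    }
    void play_sound(int handle) override { playing_[handle] = true; }
    void stop_sound(int handle) override { playing_[handle] = false; }
    bool is_playing(int handle) const {
        std::map<int, bool>::const_iterator it = playing_.find(handle);
        return it != playing_.end() && it->second;
    }
private:
    int next_handle_ = 0;
    std::map<int, std::vector<uint8_t> > sounds_;
    std::map<int, bool> playing_;
};
```
+ </programlisting>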
+
+ <sect3 id="soundtypes">
+ <title>Sound types</title>
+ <para>
+ Sounds can be divided into two groups: event sounds and sound streams.
+ Event sounds are contained in a single SWF frame, but their playtime can
+ span multiple frames. Sound streams can be (and normally are) divided
+ over the SWF frames they span. This means that if a gotoframe jumps to
+ a frame which contains data for a sound stream, playback of the stream
+ can be picked up from there.
+ </para>
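+ <para>
+ A minimal sketch of the distinction (the types below are invented for
+ illustration): an event sound owns all of its data at once, while a
+ sound stream records which chunk belongs to which SWF frame, so a
+ gotoframe can resume playback at any frame that has data:
+ </para>
+ <programlisting>
```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Illustrative sketch, not Gnash's actual data structures.
struct event_sound {
    std::vector<uint8_t> data;  // complete, defined in a single frame
};

struct sound_stream {
    // one chunk of stream data per SWF frame that defines it
    std::map<int, std::vector<uint8_t> > chunks_by_frame;

    void add_chunk(int frame, std::vector<uint8_t> chunk) {
        chunks_by_frame[frame] = std::move(chunk);
    }

    // A gotoframe can pick playback up from any frame onward.
    size_t bytes_from(int frame) const {
        size_t total = 0;
        for (auto it = chunks_by_frame.lower_bound(frame);
             it != chunks_by_frame.end(); ++it)
            total += it->second.size();
        return total;
    }
};
```
+ </programlisting>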
+ </sect3>
+
+ <sect3 id="soundparsing">
+ <title>Sound parsing</title>
+ <para>
+ When Gnash parses a SWF file, it hands the sounds over to the
+ sound handler. Since an event sound is contained in one frame, the
+ entire event sound is retrieved at once, while a sound stream may not
+ be completely retrieved until the entire SWF file has been parsed. But
+ since the entire sound stream doesn't need to be present when playback
+ starts, there is no need to wait.
+ </para>
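+ <para>
+ The point that playback need not wait for the whole stream can be
+ sketched with a simple growable buffer (invented for illustration)
+ that the parser appends to while the player reads:
+ </para>
+ <programlisting>
```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch: the parser appends chunks as it encounters
// them, and the player can start pulling data before parsing is done.
class stream_buffer {
public:
    void append(const std::vector<uint8_t>& chunk) {
        data_.insert(data_.end(), chunk.begin(), chunk.end());
    }
    // Pull up to n bytes; returns how many were actually available.
    size_t pull(size_t n) {
        size_t avail = data_.size() - read_pos_;
        size_t taken = n < avail ? n : avail;
        read_pos_ += taken;
        return taken;
    }
private:
    std::vector<uint8_t> data_;
    size_t read_pos_ = 0;
};
```
+ </programlisting>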
+ </sect3>
+
+ <sect3 id="soundplayback">
+ <title>Sound playback</title>
+ <para>
+ When Gnash plays a SWF file and a sound is to be played, it calls the
+ sound handler, which starts playing the sound and returns. All the
+ playing is done by threads (in both SDL_mixer and Gstreamer), so once
+ started, the audio and graphics are not synchronized with each other,
+ which means that we have to trust both the graphics renderer and the
+ audio backend to play at the correct speed.
+ </para>
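+ <para>
+ The "start and return" behaviour might be sketched like this
+ (simplified; the names are invented, and real backends write samples
+ from a mixing callback rather than a bare loop):
+ </para>
+ <programlisting>
```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Illustrative sketch: play_sound_and_return() returns immediately;
// actual output runs on its own thread, unsynchronized with rendering.
std::atomic<int> samples_played{0};
std::atomic<bool> keep_playing{true};

void audio_thread() {
    while (keep_playing.load()) {
        samples_played.fetch_add(1);  // stand-in for writing samples
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}

void play_sound_and_return(std::thread& t) {
    t = std::thread(audio_thread);  // caller does not block
}
```
+ </programlisting>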
+ </sect3>
+
+ <sect3 id="sdl_mixer">
+ <title>The SDL_mixer backend</title>
+ <para>
+ The SDL_mixer backend only supports event sounds, and it is not very
+ likely that it will ever support sound streams. When receiving an event
+ sound it decodes it at once, using either an internal ADPCM decoder or
+ the libmad mp3 decoder, and then stores the audio output in a raw format
+ readable by SDL_mixer. When playing a sound, all the raw data is
+ transferred to SDL_mixer, which plays it. The advantage of the
+ SDL_mixer backend is that it gives instant playback when asked (no
+ decoding delay and very little setup delay). The drawbacks are that it
+ doesn't support sound streams, and that it decodes everything while
+ parsing, which means that there can be a considerable pause when
+ decoding large audio blocks.
+ </para>
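+ <para>
+ The decode-at-parse-time approach can be sketched as follows (the
+ "decoder" here is a trivial 8-bit-to-16-bit expansion standing in for
+ the real ADPCM and mp3 decoders):
+ </para>
+ <programlisting>
```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch of decoding once at parse time and keeping the
// raw result around, so playback needs no further decoding.
std::vector<int16_t> decode_at_parse_time(const std::vector<uint8_t>& in) {
    std::vector<int16_t> out;
    out.reserve(in.size());
    for (uint8_t s : in)
        // expand unsigned 8-bit to signed 16-bit (stand-in decoder)
        out.push_back(static_cast<int16_t>((s - 128) * 256));
    return out;
}

// "Playback" is then just handing the ready-made raw data over.
const std::vector<int16_t>& play(const std::vector<int16_t>& raw) {
    return raw;
}
```
+ </programlisting>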
+ </sect3>
+
+ <sect3 id="gstreamer">
+ <title>The Gstreamer backend</title>
+ <para>
+ The Gstreamer backend, though not complete, supports both sound streams
+ and event sounds. When receiving sound data it stores it uncompressed,
+ though it does decode ADPCM event sounds in the same manner as the
+ SDL_mixer backend. When playback starts, the backend sets up a
+ Gstreamer bin containing a decoder (and other things needed) and places
+ it in a Gstreamer pipeline, which plays the audio. The sound data is
+ not all passed at once, but in small chunks, and the pipeline is fed
+ via callbacks. The advantages of the Gstreamer backend are that it
+ supports both kinds of sounds, it avoids the mp3 licensing issues, and
+ it should be relatively easy to add Vorbis support. The drawbacks are
+ that it has a longer delay when starting the playback of a sound, and
+ it suffers from some bugs in Gstreamer that have yet to be fixed.
+ </para>
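+ <para>
+ The chunk-feeding via callbacks can be sketched abstractly (this
+ mimics the shape of a fed-on-demand source element; the class and
+ names here are invented, not the Gstreamer API):
+ </para>
+ <programlisting>
```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Illustrative sketch: the pipeline pulls from the source element, and
// when the source runs dry a callback refills it with the next chunk.
class chunked_source {
public:
    using refill_fn = std::function<std::vector<uint8_t>()>;
    explicit chunked_source(refill_fn refill) : refill_(refill) {}

    // Pull one byte; ask the callback for more data when empty.
    bool pull(uint8_t& out) {
        if (pos_ == buf_.size()) {
            buf_ = refill_();
            pos_ = 0;
            if (buf_.empty()) return false;  // end of stream
        }
        out = buf_[pos_++];
        return true;
    }
private:
    refill_fn refill_;
    std::vector<uint8_t> buf_;
    size_t pos_ = 0;
};
```
+ </programlisting>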
+ </sect3>
+
+ <sect3 id="audio-future">
+ <title>Future audio backends</title>
+ <para>
+ It would probably be desirable to write more backends in the future,
+ either because other and better backend systems are brought to our
+ attention, or perhaps because internal sound handling is better
+ suited for embedded platforms with limited software installed.
+ </para>
+ </sect3>
+
+ <sect3 id="gstreamer-details">
+ <title>Detailed description of the Gstreamer backend</title>
+ <para>
+ Gstreamer works with pipelines, bins and elements. The pipeline is the
+ main bin, where all other bins or elements are placed. Visually, the
+ audio pipeline in Gnash looks like this:
+ </para>
+
+ <programlisting>
+ ___
+ |Bin|_
+ |___| \
+ ___ \ _____ ____________
+ |Bin|___|Adder|_____|Audio output|
+ |___| |_____| |____________|
+ ___ /
+ |Bin|_/
+ |___|
+
+ </programlisting>
+
+ <para>
+ There is one bin for each sound being played. If a sound is
+ played more than once at the same time, multiple bins will be made. Each
+ bin contains:
+ </para>
+
+ <programlisting>
+
+ |source|---|capsfilter|---|decoder|---|aconverter|---|aresampler|---|volume|
+
+ </programlisting>
+
+ <para>
+ In the source element we place parts of the undecoded sound data, and
+ when playing, the pipeline pulls the data from the element. Via
+ callbacks it is refilled when needed. In the capsfilter the data is
+ labeled with its format. The decoder (surprise!) decodes
+ the data. The audioconverter converts the now-raw sound data into a
+ format accepted by the adder; all input to the adder must be in the
+ same format. The audioresampler resamples the raw sound data to a
+ sample rate accepted by the adder; all input to the adder must have
+ the same sample rate. The volume element makes it possible to control
+ the volume of each sound.
+ </para>
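+ <para>
+ The resampler and volume stages amount to simple transforms on the raw
+ samples. A much-simplified sketch (nearest-neighbour resampling, with
+ invented helper names, nothing like Gstreamer's real elements):
+ </para>
+ <programlisting>
```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative stand-in for the aresampler element.
std::vector<int16_t> resample(const std::vector<int16_t>& in,
                              int from_rate, int to_rate) {
    std::vector<int16_t> out;
    size_t n = in.size() * to_rate / from_rate;
    for (size_t i = 0; i < n; ++i)
        out.push_back(in[i * from_rate / to_rate]);  // nearest neighbour
    return out;
}

// Illustrative stand-in for the volume element.
std::vector<int16_t> apply_volume(std::vector<int16_t> in, double vol) {
    for (int16_t& s : in)
        s = static_cast<int16_t>(s * vol);
    return in;
}
```
+ </programlisting>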
+
+ <para>
+ When a sound has finished playing it emits an End-Of-Stream signal
+ (EOS), which is caught by an event-handler callback, which then makes
+ sure that the bin in question is removed from the pipeline. When a
+ sound is told by Gnash to stop playback before it has ended, we do
+ something (not yet finally implemented) which makes the bin emit
+ an EOS, and the event-handler callback will remove the sound from the
+ pipeline. Unfortunately Gstreamer currently has a bug which causes the
+ entire pipeline to stop playing when unlinking an element from the
+ pipeline; so far no fix is known.
+ </para>
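+ <para>
+ The EOS bookkeeping can be sketched as a callback that removes the
+ finished bin from the pipeline's list (invented names mirroring the
+ described behaviour, not the Gstreamer API):
+ </para>
+ <programlisting>
```cpp
#include <algorithm>
#include <vector>

// Illustrative sketch of removing a sound's bin when it signals EOS.
struct pipeline {
    std::vector<int> bins;  // one id per currently playing sound

    void add_bin(int id) { bins.push_back(id); }

    // Called from the event-handler callback when a bin emits EOS.
    void on_eos(int id) {
        bins.erase(std::remove(bins.begin(), bins.end(), id), bins.end());
    }
};
```
+ </programlisting>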
+
+ <para>
+ Gstreamer also contains a bug concerning linking multiple elements to
+ the adder in rapid succession, which causes the adder to "die" and stop
+ the playback.
+ </para>
+ </sect3>
+
+
+ </sect2>
+
<sect2 id="testing">
<title>Testing Support</title>