[Gnash-commit] gnash ChangeLog libmedia/AudioDecoder.h libmedi...
From: Sandro Santilli
Subject: [Gnash-commit] gnash ChangeLog libmedia/AudioDecoder.h libmedi...
Date: Tue, 03 Jun 2008 16:11:45 +0000
CVSROOT: /sources/gnash
Module name: gnash
Changes by: Sandro Santilli <strk> 08/06/03 16:11:45
Modified files:
. : ChangeLog
libmedia : AudioDecoder.h MediaHandler.h
libmedia/ffmpeg: MediaHandlerFfmpeg.cpp MediaHandlerFfmpeg.h
libmedia/gst : MediaHandlerGst.cpp MediaHandlerGst.h
server/asobj : NetStreamFfmpeg.cpp NetStreamFfmpeg.h
server/parser : video_stream_def.cpp
Log message:
* libmedia/MediaHandler.h: add createAudioDecoder, taking AudioInfo
as input, change createVideoDecoder to take VideoInfo for
consistency.
* libmedia/AudioDecoder.h: minor comment for interface cleanup
* libmedia/ffmpeg/MediaHandlerFfmpeg.{cpp,h}: implement
createAudioDecoder, fix createVideoDecoder.
* libmedia/gst/MediaHandlerGst.{cpp,h}: implement createAudioDecoder,
fix createVideoDecoder.
* server/asobj/NetStreamFfmpeg.{cpp,h}: drop all decoding code,
rely solely on the MediaHandler for that.
* server/parser/video_stream_def.cpp: update calls to
createVideoDecoder now that a VideoInfo is required.
CVSWeb URLs:
http://cvs.savannah.gnu.org/viewcvs/gnash/ChangeLog?cvsroot=gnash&r1=1.6791&r2=1.6792
http://cvs.savannah.gnu.org/viewcvs/gnash/libmedia/AudioDecoder.h?cvsroot=gnash&r1=1.11&r2=1.12
http://cvs.savannah.gnu.org/viewcvs/gnash/libmedia/MediaHandler.h?cvsroot=gnash&r1=1.1&r2=1.2
http://cvs.savannah.gnu.org/viewcvs/gnash/libmedia/ffmpeg/MediaHandlerFfmpeg.cpp?cvsroot=gnash&r1=1.1&r2=1.2
http://cvs.savannah.gnu.org/viewcvs/gnash/libmedia/ffmpeg/MediaHandlerFfmpeg.h?cvsroot=gnash&r1=1.1&r2=1.2
http://cvs.savannah.gnu.org/viewcvs/gnash/libmedia/gst/MediaHandlerGst.cpp?cvsroot=gnash&r1=1.1&r2=1.2
http://cvs.savannah.gnu.org/viewcvs/gnash/libmedia/gst/MediaHandlerGst.h?cvsroot=gnash&r1=1.1&r2=1.2
http://cvs.savannah.gnu.org/viewcvs/gnash/server/asobj/NetStreamFfmpeg.cpp?cvsroot=gnash&r1=1.139&r2=1.140
http://cvs.savannah.gnu.org/viewcvs/gnash/server/asobj/NetStreamFfmpeg.h?cvsroot=gnash&r1=1.71&r2=1.72
http://cvs.savannah.gnu.org/viewcvs/gnash/server/parser/video_stream_def.cpp?cvsroot=gnash&r1=1.47&r2=1.48
Patches:
Index: ChangeLog
===================================================================
RCS file: /sources/gnash/gnash/ChangeLog,v
retrieving revision 1.6791
retrieving revision 1.6792
diff -u -b -r1.6791 -r1.6792
--- ChangeLog 3 Jun 2008 14:48:51 -0000 1.6791
+++ ChangeLog 3 Jun 2008 16:11:43 -0000 1.6792
@@ -1,5 +1,20 @@
2008-06-03 Sandro Santilli <address@hidden>
+ * libmedia/MediaHandler.h: add createAudioDecoder, taking AudioInfo
+ as input, change createVideoDecoder to take VideoInfo for
+ consistency.
+ * libmedia/AudioDecoder.h: minor comment for interface cleanup
+ * libmedia/ffmpeg/MediaHandlerFfmpeg.{cpp,h}: implement
+ createAudioDecoder, fix createVideoDecoder.
+ * libmedia/gst/MediaHandlerGst.{cpp,h}: implement createAudioDecoder,
+ fix createVideoDecoder.
+ * server/asobj/NetStreamFfmpeg.{cpp,h}: drop all decoding code,
+ rely solely on the MediaHandler for that.
+ * server/parser/video_stream_def.cpp: update calls to
+ createVideoDecoder now that a VideoInfo is required.
+
+2008-06-03 Sandro Santilli <address@hidden>
+
* libmedia/MediaHandler.{cpp,h},
libmedia/gst/MediaHandlerGst.{cpp,h},
libmedia/ffmpeg/MediaHandlerFfmpeg.{cpp,h}
Index: libmedia/AudioDecoder.h
===================================================================
RCS file: /sources/gnash/gnash/libmedia/AudioDecoder.h,v
retrieving revision 1.11
retrieving revision 1.12
diff -u -b -r1.11 -r1.12
--- libmedia/AudioDecoder.h 3 Jun 2008 12:39:52 -0000 1.11
+++ libmedia/AudioDecoder.h 3 Jun 2008 16:11:44 -0000 1.12
@@ -50,6 +50,8 @@
///
/// @return true if succesfull else false
///
+ /// TODO: take AudioInfo by ref, not pointer
+ ///
virtual bool setup(AudioInfo* /*info*/) { return false; }
/// Sets up the decoder.
Index: libmedia/MediaHandler.h
===================================================================
RCS file: /sources/gnash/gnash/libmedia/MediaHandler.h,v
retrieving revision 1.1
retrieving revision 1.2
diff -u -b -r1.1 -r1.2
--- libmedia/MediaHandler.h 3 Jun 2008 14:48:53 -0000 1.1
+++ libmedia/MediaHandler.h 3 Jun 2008 16:11:44 -0000 1.2
@@ -33,6 +33,9 @@
namespace gnash {
namespace media {
class VideoDecoder;
+ class AudioDecoder;
+ class AudioInfo;
+ class VideoInfo;
}
}
@@ -83,8 +86,15 @@
///
/// @return 0 if no decoder could be created for the specified encoding
///
-	virtual std::auto_ptr<VideoDecoder> createVideoDecoder(videoCodecType format, int width, int height)=0;
+	virtual std::auto_ptr<VideoDecoder> createVideoDecoder(VideoInfo& info)=0;
+
+	/// Create an AudioDecoder for decoding what's specified in the AudioInfo
+	//
+	/// @param info
+	///	AudioInfo class with all the info needed to decode
+	///	the sound correctly.
+	///
+	virtual std::auto_ptr<AudioDecoder> createAudioDecoder(AudioInfo& info)=0;
protected:
Index: libmedia/ffmpeg/MediaHandlerFfmpeg.cpp
===================================================================
RCS file: /sources/gnash/gnash/libmedia/ffmpeg/MediaHandlerFfmpeg.cpp,v
retrieving revision 1.1
retrieving revision 1.2
diff -u -b -r1.1 -r1.2
--- libmedia/ffmpeg/MediaHandlerFfmpeg.cpp	3 Jun 2008 14:48:54 -0000	1.1
+++ libmedia/ffmpeg/MediaHandlerFfmpeg.cpp	3 Jun 2008 16:11:44 -0000	1.2
@@ -20,6 +20,7 @@
#include "MediaHandlerFfmpeg.h"
#include "VideoDecoderFfmpeg.h"
+#include "AudioDecoderFfmpeg.h"
#include "tu_file.h" // for visibility of destructor
#include "MediaParser.h" // for visibility of destructor
@@ -35,11 +36,27 @@
}
std::auto_ptr<VideoDecoder>
-MediaHandlerFfmpeg::createVideoDecoder(videoCodecType format, int width, int height)
+MediaHandlerFfmpeg::createVideoDecoder(VideoInfo& info)
 {
+	if ( info.type != FLASH )
+	{
+		log_error("Non-flash video encoding not supported yet by FFMPEG VideoDecoder");
+		return std::auto_ptr<VideoDecoder>(0);
+	}
+	videoCodecType format = static_cast<videoCodecType>(info.codec);
+	int width = info.width;
+	int height = info.height;
 	std::auto_ptr<VideoDecoder> ret( new VideoDecoderFfmpeg(format, width, height) );
return ret;
}
+std::auto_ptr<AudioDecoder>
+MediaHandlerFfmpeg::createAudioDecoder(AudioInfo& info)
+{
+ std::auto_ptr<AudioDecoder> ret( new AudioDecoderFfmpeg() );
+ ret->setup(&info);
+ return ret;
+}
+
} // gnash.media namespace
} // gnash namespace
Index: libmedia/ffmpeg/MediaHandlerFfmpeg.h
===================================================================
RCS file: /sources/gnash/gnash/libmedia/ffmpeg/MediaHandlerFfmpeg.h,v
retrieving revision 1.1
retrieving revision 1.2
diff -u -b -r1.1 -r1.2
--- libmedia/ffmpeg/MediaHandlerFfmpeg.h	3 Jun 2008 14:48:54 -0000	1.1
+++ libmedia/ffmpeg/MediaHandlerFfmpeg.h	3 Jun 2008 16:11:44 -0000	1.2
@@ -37,7 +37,9 @@
virtual std::auto_ptr<MediaParser>
createMediaParser(std::auto_ptr<tu_file> stream);
-	virtual std::auto_ptr<VideoDecoder> createVideoDecoder(videoCodecType format, int width, int height);
+ virtual std::auto_ptr<VideoDecoder> createVideoDecoder(VideoInfo& info);
+
+ virtual std::auto_ptr<AudioDecoder> createAudioDecoder(AudioInfo& info);
};
Index: libmedia/gst/MediaHandlerGst.cpp
===================================================================
RCS file: /sources/gnash/gnash/libmedia/gst/MediaHandlerGst.cpp,v
retrieving revision 1.1
retrieving revision 1.2
diff -u -b -r1.1 -r1.2
--- libmedia/gst/MediaHandlerGst.cpp 3 Jun 2008 14:48:54 -0000 1.1
+++ libmedia/gst/MediaHandlerGst.cpp 3 Jun 2008 16:11:45 -0000 1.2
@@ -20,6 +20,7 @@
#include "MediaHandlerGst.h"
#include "VideoDecoderGst.h"
+#include "AudioDecoderGst.h"
#include "tu_file.h" // for visibility of destructor
#include "MediaParser.h" // for visibility of destructor
@@ -35,11 +36,28 @@
}
std::auto_ptr<VideoDecoder>
-MediaHandlerGst::createVideoDecoder(videoCodecType format, int width, int height)
+MediaHandlerGst::createVideoDecoder(VideoInfo& info)
 {
+	if ( info.type != FLASH )
+	{
+		log_error("Non-flash video encoding not supported yet by GST VideoDecoder");
+		return std::auto_ptr<VideoDecoder>(0);
+	}
+	videoCodecType format = static_cast<videoCodecType>(info.codec);
+	int width = info.width;
+	int height = info.height;
+
 	std::auto_ptr<VideoDecoder> ret( new VideoDecoderGst(format, width, height) );
return ret;
}
+std::auto_ptr<AudioDecoder>
+MediaHandlerGst::createAudioDecoder(AudioInfo& info)
+{
+ std::auto_ptr<AudioDecoder> ret( new AudioDecoderGst() );
+ ret->setup(&info);
+ return ret;
+}
+
} // gnash.media namespace
} // gnash namespace
Index: libmedia/gst/MediaHandlerGst.h
===================================================================
RCS file: /sources/gnash/gnash/libmedia/gst/MediaHandlerGst.h,v
retrieving revision 1.1
retrieving revision 1.2
diff -u -b -r1.1 -r1.2
--- libmedia/gst/MediaHandlerGst.h 3 Jun 2008 14:48:55 -0000 1.1
+++ libmedia/gst/MediaHandlerGst.h 3 Jun 2008 16:11:45 -0000 1.2
@@ -37,7 +37,9 @@
virtual std::auto_ptr<MediaParser>
createMediaParser(std::auto_ptr<tu_file> stream);
-	virtual std::auto_ptr<VideoDecoder> createVideoDecoder(videoCodecType format, int width, int height);
+ virtual std::auto_ptr<VideoDecoder> createVideoDecoder(VideoInfo& info);
+
+ virtual std::auto_ptr<AudioDecoder> createAudioDecoder(AudioInfo& info);
};
Index: server/asobj/NetStreamFfmpeg.cpp
===================================================================
RCS file: /sources/gnash/gnash/server/asobj/NetStreamFfmpeg.cpp,v
retrieving revision 1.139
retrieving revision 1.140
diff -u -b -r1.139 -r1.140
--- server/asobj/NetStreamFfmpeg.cpp 3 Jun 2008 12:39:55 -0000 1.139
+++ server/asobj/NetStreamFfmpeg.cpp 3 Jun 2008 16:11:45 -0000 1.140
@@ -31,11 +31,15 @@
#include "render.h"
#include "movie_root.h"
#include "sound_handler.h"
-#include "VideoDecoderFfmpeg.h"
+
+#include "MediaParser.h"
+#include "VideoDecoder.h"
+#include "AudioDecoder.h"
+#include "MediaHandler.h"
+
#include "SystemClock.h"
#include "gnash.h" // get_sound_handler()
-#include "FLVParser.h"
#include <boost/scoped_array.hpp>
#include <algorithm> // std::min
@@ -76,14 +80,6 @@
_decoding_state(DEC_NONE),
- m_video_index(-1),
- m_audio_index(-1),
-
- m_VCodecCtx(NULL),
- m_ACodecCtx(NULL),
- m_FormatCtx(NULL),
- m_Frame(NULL),
-
#ifdef LOAD_MEDIA_IN_A_SEPARATE_THREAD
_parserThread(NULL),
_parserThreadBarrier(2), // main and decoder threads
@@ -151,28 +147,9 @@
_soundHandler->detach_aux_streamer(this);
}
- if (m_Frame) av_free(m_Frame);
- m_Frame = NULL;
-
- if ( m_VCodecCtx ) {
- avcodec_close( m_VCodecCtx );
- }
- m_VCodecCtx = NULL;
-
- if ( m_ACodecCtx ) {
- avcodec_close( m_ACodecCtx );
- }
- m_ACodecCtx = NULL;
-
- if (m_FormatCtx)
- {
- m_FormatCtx->iformat->flags = AVFMT_NOFILE;
- av_close_input_file(m_FormatCtx);
- m_FormatCtx = NULL;
- }
-
delete m_imageframe;
m_imageframe = NULL;
+
delete m_unqueued_data;
m_unqueued_data = NULL;
@@ -300,114 +277,42 @@
return;
}
-/// Finds a decoder, allocates a context and initializes it.
-//
-/// @param codec_id the codec ID to find
-/// @return the initialized context, or NULL on failure. The caller is
-/// responsible for deallocating!
-///
-/// TODO: drop, let VideoDecoder/AudioDecoder do this !
-///
-static AVCodecContext*
-initContext(enum CodecID codec_id)
-{
-
- AVCodec* codec = avcodec_find_decoder(codec_id);
- if (!codec)
- {
- log_error(_("libavcodec couldn't find decoder"));
- return NULL;
- }
-
- AVCodecContext * context = avcodec_alloc_context();
- if (!context)
- {
- log_error(_("libavcodec couldn't allocate context"));
- return NULL;
- }
-
- int rv = avcodec_open(context, codec);
- if (rv < 0)
- {
- avcodec_close(context);
- log_error(_("libavcodec failed to initialize codec"));
- return NULL;
- }
-
- return context;
-}
-
-/// Gets video info from the parser and initializes the codec.
-//
-/// @param parser the parser to use to get video information.
-/// @return the initialized context, or NULL on failure. The caller
-/// is responsible for deallocating this pointer.
-static AVCodecContext*
-initVideoDecoder(media::MediaParser& parser)
+void
+NetStreamFfmpeg::initVideoDecoder(media::MediaParser& parser)
{
// Get video info from the parser
media::VideoInfo* videoInfo = parser.getVideoInfo();
- if (!videoInfo)
- {
- return NULL;
+ if (!videoInfo) {
+ log_debug("No video in NetStream stream");
+ return;
}
- enum CodecID codec_id;
+ media::MediaHandler* mh = media::MediaHandler::get();
+ assert ( mh ); // caller should check this
- // Find the decoder and init the parser
- switch(videoInfo->codec)
- {
- case media::VIDEO_CODEC_H263:
- codec_id = CODEC_ID_FLV1;
- break;
-#ifdef FFMPEG_VP6
- case media::VIDEO_CODEC_VP6:
- codec_id = CODEC_ID_VP6F;
- break;
-#endif
- case media::VIDEO_CODEC_SCREENVIDEO:
- codec_id = CODEC_ID_FLASHSV;
- break;
- default:
-			log_error(_("Unsupported video codec %d"), (int) videoInfo->codec);
- return NULL;
- }
-
- return initContext(codec_id);
+	_videoDecoder = mh->createVideoDecoder(*videoInfo);
+	if ( ! _videoDecoder.get() )
+		log_error(_("Could not create video decoder for codec %d"), videoInfo->codec);
}
-/// Like initVideoDecoder, but for audio.
-static AVCodecContext*
-initAudioDecoder(media::MediaParser& parser)
+/* private */
+void
+NetStreamFfmpeg::initAudioDecoder(media::MediaParser& parser)
{
// Get audio info from the parser
media::AudioInfo* audioInfo = parser.getAudioInfo();
- if (!audioInfo)
- {
- log_debug("No audio in FLV stream");
- return NULL;
+ if (!audioInfo) {
+ log_debug("No audio in NetStream input");
+ return;
}
- enum CodecID codec_id;
-
- switch(audioInfo->codec)
- {
- case media::AUDIO_CODEC_RAW:
- codec_id = CODEC_ID_PCM_U16LE;
- break;
- case media::AUDIO_CODEC_ADPCM:
- codec_id = CODEC_ID_ADPCM_SWF;
- break;
- case media::AUDIO_CODEC_MP3:
- codec_id = CODEC_ID_MP3;
- break;
- default:
-			log_error(_("Unsupported audio codec %d"), (int)audioInfo->codec);
- return NULL;
- }
+ media::MediaHandler* mh = media::MediaHandler::get();
+ assert ( mh ); // caller should check this
- return initContext(codec_id);
+	_audioDecoder = mh->createAudioDecoder(*audioInfo);
+	if ( ! _audioDecoder.get() )
+		log_error(_("Could not create audio decoder for codec %d"), audioInfo->codec);
}
@@ -444,199 +349,28 @@
inputPos = 0;
- // Check if the file is a FLV, in which case we use our own parser
- char head[4] = {0, 0, 0, 0};
- if (_inputStream->read_bytes(head, 3) < 3)
+ media::MediaHandler* mh = media::MediaHandler::get();
+ if ( ! mh )
{
- log_error(_("Could not read 3 bytes from NetStream input"));
-		// not really correct, the stream was found, just wasn't what we expected..
- setStatus(streamNotFound);
+ LOG_ONCE( log_error(_("No Media handler registered, can't "
+ "parse NetStream input")) );
return false;
}
+ m_parser = mh->createMediaParser(_inputStream);
+ assert(!_inputStream.get());
- //
- // TODO: let all of this be handled by a MediaParserFactory
- // (ie: inspecting type of input)
- //
-
- _inputStream->set_position(0);
- if (std::string(head) == "FLV")
- {
- m_isFLV = true;
- assert ( !m_parser.get() );
-
- m_parser.reset( new media::FLVParser(_inputStream) );
-		assert(! _inputStream.get() ); // TODO: when ownership will be transferred...
-
- if (! m_parser.get() )
+ if ( ! m_parser.get() )
{
-			log_error(_("Gnash could not open FLV movie: %s"), url.c_str());
-			// not really correct, the stream was found, just wasn't what we expected..
+ log_error(_("Unable to create parser for NetStream input"));
+ // not necessarely correct, the stream might have been found...
setStatus(streamNotFound);
return false;
}
- // Init the avdecoder-decoder
- avcodec_init();
- avcodec_register_all();
-
-		m_VCodecCtx = initVideoDecoder(*m_parser); // TODO: let VideoDecoder do this !
- if (!m_VCodecCtx)
- {
- log_error(_("Failed to initialize FLV video codec"));
- return false;
- }
+ initVideoDecoder(*m_parser);
+ initAudioDecoder(*m_parser);
-		m_ACodecCtx = initAudioDecoder(*m_parser); // TODO: let AudioDecoder do this !
- if (!m_ACodecCtx)
- {
- // There might simply be no audio, no problem...
- //log_error(_("Failed to initialize FLV audio codec"));
- //return false;
- }
-
- // We just define the indexes here, they're not really used when
- // the file format is FLV
- m_video_index = 0;
- m_audio_index = 1;
-
- // Allocate a frame to store the decoded frame in
- m_Frame = avcodec_alloc_frame();
- }
- else
- {
-
- // This registers all available file formats and codecs
- // with the library so they will be used automatically when
- // a file with the corresponding format/codec is opened
- // XXX should we call avcodec_init() first?
- av_register_all();
-
- AVInputFormat* inputFmt = probeStream(this);
- if (!inputFmt)
- {
- log_error(_("Couldn't determine stream input format
from URL %s"), url.c_str());
- return false;
- }
-
- // After the format probe, reset to the beginning of the file.
- // TODO: have this done by probeStream !
- // (actually, have the whole thing done by MediaParser)
- _inputStream->set_position(0);
-
- // Setup the filereader/seeker mechanism. 7th argument (NULL)
is the writer function,
- // which isn't needed.
- init_put_byte(&ByteIOCxt, new boost::uint8_t[500000], 500000,
0, this, NetStreamFfmpeg::readPacket, NULL, NetStreamFfmpeg::seekMedia);
- ByteIOCxt.is_streamed = 1;
-
- m_FormatCtx = av_alloc_format_context();
-
- // Open the stream. the 4th argument is the filename, which we
ignore.
- if(av_open_input_stream(&m_FormatCtx, &ByteIOCxt, "", inputFmt,
NULL) < 0)
- {
- log_error(_("Couldn't open file '%s' for decoding"),
url.c_str());
- setStatus(streamNotFound);
- return false;
- }
-
- // Next, we need to retrieve information about the streams
contained in the file
- // This fills the streams field of the AVFormatContext with
valid information
- int ret = av_find_stream_info(m_FormatCtx);
- if (ret < 0)
- {
- log_error(_("Couldn't find stream information from
'%s', error code: %d"), url.c_str(), ret);
- return false;
- }
-
- // m_FormatCtx->pb.eof_reached = 0;
- // av_read_play(m_FormatCtx);
-
- // Find the first video & audio stream
- m_video_index = -1;
- m_audio_index = -1;
- //assert(m_FormatCtx->nb_streams >= 0); useless assert.
- for (unsigned int i = 0; i < (unsigned)m_FormatCtx->nb_streams;
i++)
- {
- AVCodecContext* enc = m_FormatCtx->streams[i]->codec;
-
- switch (enc->codec_type)
- {
- case CODEC_TYPE_AUDIO:
- if (m_audio_index < 0)
- {
- m_audio_index = i;
- m_audio_stream =
m_FormatCtx->streams[i];
- }
- break;
-
- case CODEC_TYPE_VIDEO:
- if (m_video_index < 0)
- {
- m_video_index = i;
- m_video_stream =
m_FormatCtx->streams[i];
- }
- break;
- default:
- break;
- }
- }
-
- if (m_video_index < 0)
- {
- log_error(_("Didn't find a video stream from '%s'"),
url.c_str());
- return false;
- }
-
- // Get a pointer to the codec context for the video stream
- m_VCodecCtx = m_FormatCtx->streams[m_video_index]->codec;
-
- // Find the decoder for the video stream
- AVCodec* pCodec = avcodec_find_decoder(m_VCodecCtx->codec_id);
- if (pCodec == NULL)
- {
- m_VCodecCtx = NULL;
- log_error(_("Video decoder %d not found"),
- m_VCodecCtx->codec_id);
- return false;
- }
-
- // Open codec
- if (avcodec_open(m_VCodecCtx, pCodec) < 0)
- {
- log_error(_("Could not open codec %d"),
- m_VCodecCtx->codec_id);
- }
-
- // Allocate a frame to store the decoded frame in
- m_Frame = avcodec_alloc_frame();
-
- if ( m_audio_index >= 0 && _soundHandler )
- {
- // Get a pointer to the audio codec context for the
video stream
- m_ACodecCtx =
m_FormatCtx->streams[m_audio_index]->codec;
-
- // Find the decoder for the audio stream
- AVCodec* pACodec =
avcodec_find_decoder(m_ACodecCtx->codec_id);
- if (pACodec == NULL)
- {
- log_error(_("No available audio decoder %d to
process MPEG file: '%s'"),
- m_ACodecCtx->codec_id, url.c_str());
- return false;
- }
-
- // Open codec
- if (avcodec_open(m_ACodecCtx, pACodec) < 0)
- {
- log_error(_("Could not open audio codec %d for
%s"),
- m_ACodecCtx->codec_id, url.c_str());
- return false;
- }
-
- }
- }
-
- //_playHead.init(m_VCodecCtx!=0, false); // second arg should be
m_ACodecCtx!=0, but we're testing video only for now
- _playHead.init(m_VCodecCtx!=0, m_ACodecCtx!=0);
+ _playHead.init(_videoDecoder.get(), _audioDecoder.get());
_playHead.setState(PlayHead::PLAY_PLAYING);
decodingStatus(DEC_BUFFERING);
@@ -770,14 +504,18 @@
return true;
}
-media::raw_mediadata_t*
+std::auto_ptr<image::rgb>
NetStreamFfmpeg::getDecodedVideoFrame(boost::uint32_t ts)
{
+ assert(_videoDecoder.get()); // caller should check this
+
+ std::auto_ptr<image::rgb> video;
+
assert(m_parser.get());
if ( ! m_parser.get() )
{
log_error("getDecodedVideoFrame: no parser available");
- return 0; // no parser, no party
+ return video; // no parser, no party
}
boost::uint64_t nextTimestamp;
@@ -787,7 +525,7 @@
	log_debug("getDecodedVideoFrame(%d): no more video frames in input (nextVideoFrameTimestamp returned false)");
#endif // GNASH_DEBUG_DECODING
decodingStatus(DEC_STOPPED);
- return 0;
+ return video;
}
if ( nextTimestamp > ts )
@@ -796,15 +534,14 @@
	log_debug("%p.getDecodedVideoFrame(%d): next video frame is in the future (%d)",
this, ts, nextTimestamp);
#endif // GNASH_DEBUG_DECODING
- return 0; // next frame is in the future
+ return video; // next frame is in the future
}
// Loop until a good frame is found
- media::raw_mediadata_t* video = 0;
while ( 1 )
{
video = decodeNextVideoFrame();
- if ( ! video )
+ if ( ! video.get() )
{
log_error("nextVideoFrameTimestamp returned true, "
"but decodeNextVideoFrame returned null, "
@@ -837,13 +574,15 @@
return video;
}
-media::raw_mediadata_t*
+std::auto_ptr<image::rgb>
NetStreamFfmpeg::decodeNextVideoFrame()
{
+ std::auto_ptr<image::rgb> video;
+
if ( ! m_parser.get() )
{
log_error("decodeNextVideoFrame: no parser available");
- return 0; // no parser, no party
+ return video; // no parser, no party
}
	std::auto_ptr<media::EncodedVideoFrame> frame = m_parser->nextVideoFrame();
@@ -854,20 +593,21 @@
"no more video frames in input",
this);
#endif // GNASH_DEBUG_DECODING
- return 0;
+ return video;
}
- AVPacket packet;
+ assert( _videoDecoder.get() ); // caller should check this
+	assert( ! _videoDecoder->peek() ); // everything we push, we'll pop too..
- packet.destruct = avpacket_destruct; // needed ?
- packet.size = frame->dataSize();
- // ffmpeg insist in requiring non-const AVPacket.data ...
- packet.data = const_cast<boost::uint8_t*>(frame->data());
- // FIXME: is this the right value for packet.dts?
- packet.pts = packet.dts = frame->timestamp();
- packet.stream_index = 0;
+ _videoDecoder->push(*frame);
+ video = _videoDecoder->pop();
+ if ( ! video.get() )
+ {
+ // TODO: tell more about the failure
+		log_error("Error decoding encoded video frame in NetSTream input");
+ }
- return decodeVideo(&packet);
+ return video;
}
media::raw_mediadata_t*
@@ -886,392 +626,27 @@
return 0;
}
- AVPacket packet;
-
- packet.destruct = avpacket_destruct;
- packet.size = frame->dataSize;
- packet.data = frame->data.get();
- // FIXME: is this the right value for packet.dts?
- packet.pts = packet.dts = frame->timestamp;
- packet.stream_index = 1;
-
- return decodeAudio(&packet);
-}
-
-bool
-NetStreamFfmpeg::decodeFLVFrame()
-{
-#if 1
- abort();
- return false;
-#else
- FLVFrame* frame = m_parser->nextMediaFrame(); // we don't care which
one, do we ?
-
- if (frame == NULL)
- {
- //assert ( _netCon->loadCompleted() );
- //assert ( m_parser->parsingCompleted() );
- decodingStatus(DEC_STOPPED);
- return true;
- }
-
- AVPacket packet;
-
- packet.destruct = avpacket_destruct;
- packet.size = frame->dataSize;
- packet.data = frame->data;
- // FIXME: is this the right value for packet.dts?
- packet.pts = packet.dts = static_cast<boost::int64_t>(frame->timestamp);
-
- if (frame->type == videoFrame)
- {
- packet.stream_index = 0;
- media::raw_mediadata_t* video = decodeVideo(&packet);
- assert (m_isFLV);
- if (video)
- {
- // NOTE: Caller is assumed to have locked _qMutex
already
- if ( ! m_qvideo.push(video) )
- {
- log_error("Video queue full !");
- }
- }
- }
- else
- {
- assert(frame->type == audioFrame);
- packet.stream_index = 1;
- media::raw_mediadata_t* audio = decodeAudio(&packet);
- if ( audio )
- {
- if ( ! m_qaudio.push(audio) )
- {
- log_error("Audio queue full!");
- }
- }
- }
-
- return true;
-#endif
-}
-
-
-media::raw_mediadata_t*
-NetStreamFfmpeg::decodeAudio( AVPacket* packet )
-{
- if (!m_ACodecCtx) return 0;
-
- int frame_size;
- //static const unsigned int bufsize = (AVCODEC_MAX_AUDIO_FRAME_SIZE *
3) / 2;
- static const unsigned int bufsize = AVCODEC_MAX_AUDIO_FRAME_SIZE;
-
- if ( ! _decoderBuffer ) _decoderBuffer = new boost::uint8_t[bufsize];
-
- boost::uint8_t* ptr = _decoderBuffer;
-
-#ifdef FFMPEG_AUDIO2
- frame_size = bufsize; // TODO: is it safe not initializing this ifndef
FFMPEG_AUDIO2 ?
- if (avcodec_decode_audio2(m_ACodecCtx, (boost::int16_t*) ptr,
&frame_size, packet->data, packet->size) >= 0)
-#else
- if (avcodec_decode_audio(m_ACodecCtx, (boost::int16_t*) ptr,
&frame_size, packet->data, packet->size) >= 0)
-#endif
- {
-
- bool stereo = m_ACodecCtx->channels > 1 ? true : false;
- int samples = stereo ? frame_size >> 2 : frame_size >> 1;
-
- if (_resampler.init(m_ACodecCtx))
- {
- // Resampling is needed.
-
- // Compute new size based on frame_size and
- // resampling configuration
- double resampleFactor =
(44100.0/m_ACodecCtx->sample_rate) * (2.0/m_ACodecCtx->channels);
- int resampledFrameSize =
int(ceil(frame_size*resampleFactor));
-
- // Allocate just the required amount of bytes
- boost::uint8_t* output = new
boost::uint8_t[resampledFrameSize];
-
- samples =
_resampler.resample(reinterpret_cast<boost::int16_t*>(ptr),
-
reinterpret_cast<boost::int16_t*>(output),
- samples);
-
- if (resampledFrameSize < samples*2*2)
- {
- log_error(" --- Computation of resampled frame
size (%d) < then the one based on samples (%d)",
- resampledFrameSize, samples*2*2);
-
- log_debug(" input frame size: %d", frame_size);
- log_debug(" input sample rate: %d",
m_ACodecCtx->sample_rate);
- log_debug(" input channels: %d",
m_ACodecCtx->channels);
- log_debug(" input samples: %d", samples);
-
- log_debug(" output sample rate (assuming): %d",
44100);
- log_debug(" output channels (assuming): %d", 2);
- log_debug(" output samples: %d", samples);
-
- abort(); // the call to resample() likely
corrupted memory...
- }
-
- frame_size = samples*2*2;
-
- // ownership of memory pointed-to by 'ptr' will be
- // transferred below
- ptr = reinterpret_cast<boost::uint8_t*>(output);
-
- // we'll reuse _decoderBuffer
- }
- else
- {
- // ownership of memory pointed-to by 'ptr' will be
- // transferred below, so we reset _decoderBuffer here.
- // Doing so, next time we'll need to decode we'll create
- // a new buffer
- _decoderBuffer=0;
- }
-
media::raw_mediadata_t* raw = new media::raw_mediadata_t();
+ boost::uint32_t decodedData=0;
+ bool parseAudio = true; // I don't get this...
+	raw->m_data = _audioDecoder->decode(frame->data.get(), frame->dataSize, raw->m_size, decodedData, parseAudio);
- raw->m_data = ptr; // ownership of memory pointed by 'ptr'
transferred here
- raw->m_ptr = raw->m_data;
- raw->m_size = frame_size;
- raw->m_stream_index = m_audio_index;
-
- // set presentation timestamp
- if (packet->dts != static_cast<signed long>(AV_NOPTS_VALUE))
- {
- if (!m_isFLV) raw->m_pts =
static_cast<boost::uint32_t>(as_double(m_audio_stream->time_base) * packet->dts
* 1000.0);
- else raw->m_pts =
static_cast<boost::uint32_t>((as_double(m_ACodecCtx->time_base) * packet->dts)
* 1000.0);
- }
-
- if (raw->m_pts != 0)
+ if ( decodedData != frame->dataSize )
{
- // update audio clock with pts, if present
- m_last_audio_timestamp = raw->m_pts;
- }
- else
- {
- raw->m_pts = m_last_audio_timestamp;
		log_error("FIXME: not all data in EncodedAudioFrame was decoded, just %d/%d",
+ frame->dataSize, decodedData);
}
- // update video clock for next frame
- boost::uint32_t frame_delay;
- if (!m_isFLV)
- {
- frame_delay =
static_cast<boost::uint32_t>((as_double(m_audio_stream->time_base) *
packet->dts) * 1000.0);
- }
- else
- {
- frame_delay = m_parser->audioFrameDelay();
- }
-
- m_last_audio_timestamp += frame_delay;
+	//raw->m_stream_index = m_audio_index; // no idea what this is needed for
+ raw->m_ptr = raw->m_data; // no idea what this is needed for
+ raw->m_pts = frame->timestamp;
return raw;
- }
- return 0;
-}
-
-
-media::raw_mediadata_t*
-NetStreamFfmpeg::decodeVideo(AVPacket* packet)
-{
- if (!m_VCodecCtx) return NULL;
- if (!m_Frame) return NULL;
-
- int got = 0;
- avcodec_decode_video(m_VCodecCtx, m_Frame, &got, packet->data,
packet->size);
- if (!got) return NULL;
-
- // This tmpImage is really only used to compute proper size of the
video data...
- // stupid isn't it ?
- std::auto_ptr<image::image_base> tmpImage;
- if (m_videoFrameFormat == render::YUV)
- {
- tmpImage.reset( new image::yuv(m_VCodecCtx->width,
m_VCodecCtx->height) );
- }
- else if (m_videoFrameFormat == render::RGB)
- {
- tmpImage.reset( new image::rgb(m_VCodecCtx->width,
m_VCodecCtx->height) );
- }
-
- AVPicture rgbpicture;
-
- if (m_videoFrameFormat == render::NONE)
- {
- // NullGui?
- return NULL;
-
- }
- else if (m_videoFrameFormat == render::YUV && m_VCodecCtx->pix_fmt !=
PIX_FMT_YUV420P)
- {
- assert( 0 ); // TODO
- //img_convert((AVPicture*) pFrameYUV, PIX_FMT_YUV420P,
(AVPicture*) pFrame, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);
- // Don't use depreceted img_convert, use sws_scale
-
- }
- else if (m_videoFrameFormat == render::RGB && m_VCodecCtx->pix_fmt !=
PIX_FMT_RGB24)
- {
- rgbpicture =
media::VideoDecoderFfmpeg::convertRGB24(m_VCodecCtx, *m_Frame);
- if (!rgbpicture.data[0])
- {
- return NULL;
- }
- }
-
- media::raw_mediadata_t* video = new media::raw_mediadata_t();
-
- video->m_data = new boost::uint8_t[tmpImage->size()];
- video->m_ptr = video->m_data;
- video->m_stream_index = m_video_index;
- video->m_pts = 0;
-
- // set presentation timestamp
- if (packet->dts != static_cast<signed long>(AV_NOPTS_VALUE))
- {
- if (!m_isFLV) video->m_pts =
static_cast<boost::uint32_t>((as_double(m_video_stream->time_base) *
packet->dts) * 1000.0);
- else video->m_pts =
static_cast<boost::uint32_t>((as_double(m_VCodecCtx->time_base) * packet->dts)
* 1000.0);
- }
-
- if (video->m_pts != 0)
- {
- // update video clock with pts, if present
- m_last_video_timestamp = video->m_pts;
- }
- else
- {
- video->m_pts = m_last_video_timestamp;
- }
-
- // update video clock for next frame
- boost::uint32_t frame_delay;
- if (!m_isFLV) frame_delay =
static_cast<boost::uint32_t>(as_double(m_video_stream->codec->time_base) *
1000.0);
- else frame_delay = m_parser->videoFrameDelay();
-
- // for MPEG2, the frame can be repeated, so we update the clock
accordingly
- frame_delay += static_cast<boost::uint32_t>(m_Frame->repeat_pict *
(frame_delay * 0.5) * 1000.0);
-
- m_last_video_timestamp += frame_delay;
-
- if (m_videoFrameFormat == render::YUV)
- {
- image::yuv* yuvframe = static_cast<image::yuv*>(tmpImage.get());
- unsigned int copied = 0;
- boost::uint8_t* ptr = video->m_data;
- for (int i = 0; i < 3 ; i++)
- {
- int shift = (i == 0 ? 0 : 1);
- boost::uint8_t* yuv_factor = m_Frame->data[i];
- int h = m_VCodecCtx->height >> shift;
- int w = m_VCodecCtx->width >> shift;
- for (int j = 0; j < h; j++)
- {
- copied += w;
- assert(copied <= yuvframe->size());
- memcpy(ptr, yuv_factor, w);
- yuv_factor += m_Frame->linesize[i];
- ptr += w;
- }
- }
- video->m_size = copied;
- }
- else if (m_videoFrameFormat == render::RGB)
- {
- AVPicture* src;
- if (m_VCodecCtx->pix_fmt != PIX_FMT_RGB24)
- {
- src = &rgbpicture;
- } else
- {
- src = (AVPicture*) m_Frame;
- }
-
- boost::uint8_t* srcptr = src->data[0];
- boost::uint8_t* srcend = srcptr + rgbpicture.linesize[0] *
m_VCodecCtx->height;
- boost::uint8_t* dstptr = video->m_data;
- unsigned int srcwidth = m_VCodecCtx->width * 3;
-
- video->m_size = 0;
-
- while (srcptr < srcend) {
- memcpy(dstptr, srcptr, srcwidth);
- srcptr += src->linesize[0];
- dstptr += srcwidth;
- video->m_size += srcwidth;
- }
-
- if (m_VCodecCtx->pix_fmt != PIX_FMT_RGB24) {
- delete [] rgbpicture.data[0];
- }
-
- }
-
- return video;
}
bool NetStreamFfmpeg::decodeMediaFrame()
{
return false;
-
-#if 0 // Only FLV for now (non-FLV should be threated the same as FLV, using a MediaParser in place of the FLVParser)
-
- if (m_unqueued_data)
- {
- if (m_unqueued_data->m_stream_index == m_audio_index)
- {
- if (_soundHandler)
- {
- m_unqueued_data = m_qaudio.push(m_unqueued_data) ? NULL : m_unqueued_data;
- }
- }
- else if (m_unqueued_data->m_stream_index == m_video_index)
- {
- m_unqueued_data = m_qvideo.push(m_unqueued_data) ? NULL : m_unqueued_data;
- }
- else
- {
- log_error(_("read_frame: not audio & video stream"));
- }
- return true;
- }
-
- AVPacket packet;
-
- int rc = av_read_frame(m_FormatCtx, &packet);
-
- if (rc >= 0)
- {
- if (packet.stream_index == m_audio_index && _soundHandler)
- {
- media::raw_mediadata_t* audio = decodeAudio(&packet);
- if (!audio)
- {
- log_error(_("Problems decoding audio frame"));
- return false;
- }
- m_unqueued_data = m_qaudio.push(audio) ? NULL : audio;
- }
- else
- if (packet.stream_index == m_video_index)
- {
- media::raw_mediadata_t* video = decodeVideo(&packet);
- if (!video)
- {
- log_error(_("Problems decoding video frame"));
- return false;
- }
- m_unqueued_data = m_qvideo.push(video) ? NULL : video;
- }
- av_free_packet(&packet);
- }
- else
- {
- log_error(_("Problems decoding frame"));
- return false;
- }
-
- return true;
-#endif
}
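The disabled decodeMediaFrame() body above parks a frame in m_unqueued_data whenever a bounded queue's push() fails, and retries it on the next call. A sketch of that park-and-retry idiom with a hypothetical bounded queue (not Gnash's actual queue class):

```cpp
#include <cstddef>
#include <queue>

// Bounded queue: push() fails (returns false) when the queue is full.
template <typename T>
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t cap) : _cap(cap) {}
    bool push(T v) {
        if (_q.size() >= _cap) return false;  // full: caller keeps the item
        _q.push(v);
        return true;
    }
    std::size_t size() const { return _q.size(); }
private:
    std::size_t _cap;
    std::queue<T> _q;
};

// The pattern from the disabled code:
//   m_unqueued_data = queue.push(frame) ? NULL : frame;
// Returns 0 when the frame was queued, or the frame itself when it
// must be parked and retried later.
int* enqueueOrPark(BoundedQueue<int*>& q, int* frame)
{
    return q.push(frame) ? 0 : frame;
}
```

Parking instead of dropping means a decoded frame is never lost just because the consumer is momentarily behind.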
void
@@ -1306,65 +681,10 @@
decodingStatus(DEC_BUFFERING);
// Seek to new position
- if (m_isFLV)
- {
newpos = m_parser->seek(pos);
log_debug("m_parser->seek(%d) returned %d", pos, newpos);
- }
- else if (m_FormatCtx)
- {
- AVStream* videostream = m_FormatCtx->streams[m_video_index];
- timebase = static_cast<double>(videostream->time_base.num / videostream->time_base.den);
- newpos = static_cast<long>(pos / timebase);
-
- if (av_seek_frame(m_FormatCtx, m_video_index, newpos, 0) < 0)
- {
- log_error(_("%s: seeking failed"), __FUNCTION__);
- return;
- }
- }
- else
- {
- // TODO: should we log_debug ??
- return;
- }
-
- // This is kindof hackish and ugly :-(
- if (newpos == 0)
- {
- m_last_video_timestamp = 0;
- m_last_audio_timestamp = 0;
- }
- else if (m_isFLV)
- {
- if (m_ACodecCtx) m_last_audio_timestamp = newpos;
- if (m_VCodecCtx) m_last_video_timestamp = newpos;
- }
- else
- {
- AVPacket Packet;
- av_init_packet(&Packet);
- double newtime = 0;
- while (newtime == 0)
- {
- if (av_read_frame(m_FormatCtx, &Packet) < 0)
- {
- av_seek_frame(m_FormatCtx, -1, 0, AVSEEK_FLAG_BACKWARD);
- av_free_packet( &Packet );
- return;
- }
-
- newtime = timebase * (double)m_FormatCtx->streams[m_video_index]->cur_dts;
- }
-
- av_free_packet( &Packet );
-
- av_seek_frame(m_FormatCtx, m_video_index, newpos, 0);
- newpos = static_cast<boost::int32_t>(newtime / 1000.0);
- m_last_audio_timestamp = newpos;
- m_last_video_timestamp = newpos;
- }
+ m_last_audio_timestamp = m_last_video_timestamp = newpos;
{ // cleanup audio queue, so won't be consumed while seeking
boost::mutex::scoped_lock lock(_audioQueueMutex);
@@ -1564,10 +884,10 @@
#endif // GNASH_DEBUG_DECODING
// Get next decoded video frame from parser, will have the lowest timestamp
- media::raw_mediadata_t* video = getDecodedVideoFrame(curPos);
+ std::auto_ptr<image::rgb> video = getDecodedVideoFrame(curPos);
// to be decoded or we're out of data
- if (!video)
+ if (!video.get())
{
if ( decodingStatus() == DEC_STOPPED )
{
@@ -1598,23 +918,7 @@
}
else
{
-
- if (m_videoFrameFormat == render::YUV)
- {
- if ( ! m_imageframe ) m_imageframe = new image::yuv(m_VCodecCtx->width, m_VCodecCtx->height);
- // XXX m_imageframe might be a byte aligned buffer, while video is not!
- static_cast<image::yuv*>(m_imageframe)->update(video->m_data);
- }
- else if (m_videoFrameFormat == render::RGB)
- {
- if ( ! m_imageframe ) m_imageframe = new image::rgb(m_VCodecCtx->width, m_VCodecCtx->height);
- image::rgb* imgframe = static_cast<image::rgb*>(m_imageframe);
- rgbcopy(imgframe, video, m_VCodecCtx->width * 3);
- }
-
- // Delete the frame from the queue
- delete video;
-
+ m_imageframe = video.release(); // ownership transferred
// A frame is ready for pickup
m_newFrameReady = true;
}
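The hunk above replaces manual delete of the dequeued frame with ownership transfer: getDecodedVideoFrame() now returns the frame through std::auto_ptr, and m_imageframe takes the raw pointer via release(). The same idiom sketched with std::unique_ptr (auto_ptr's modern replacement; the types here are stand-ins, not Gnash's):

```cpp
#include <memory>

struct Frame {
    int width, height;
    Frame(int w, int h) : width(w), height(h) {}
};

// Producer: returns an owning pointer, as getDecodedVideoFrame() does.
std::unique_ptr<Frame> produceFrame(int w, int h)
{
    return std::unique_ptr<Frame>(new Frame(w, h));
}

// Consumer: takes over the raw pointer, mirroring
// m_imageframe = video.release() in the diff.
struct Player {
    Frame* current;                 // raw pointer now owns the frame
    Player() : current(0) {}
    ~Player() { delete current; }
    void setFrame(std::unique_ptr<Frame> f)
    {
        delete current;             // drop the previous frame, if any
        current = f.release();      // ownership transferred
    }
};
```

The smart pointer guarantees the frame is freed on every early-return path, which is exactly what the removed `delete video` bookkeeping could not.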
Index: server/asobj/NetStreamFfmpeg.h
===================================================================
RCS file: /sources/gnash/gnash/server/asobj/NetStreamFfmpeg.h,v
retrieving revision 1.71
retrieving revision 1.72
diff -u -b -r1.71 -r1.72
--- server/asobj/NetStreamFfmpeg.h 2 Jun 2008 20:15:21 -0000 1.71
+++ server/asobj/NetStreamFfmpeg.h 3 Jun 2008 16:11:45 -0000 1.72
@@ -29,6 +29,19 @@
#define __STDC_CONSTANT_MACROS
#endif
+#include "impl.h" // what for ? drop ?
+#include "VideoDecoder.h" // for visibility of dtor
+#include "AudioDecoder.h" // for visibility of dtor
+
+#include "image.h"
+#include "StreamProvider.h"
+#include "NetStream.h" // for inheritance
+#include "VirtualClock.h"
+
+// TODO: drop ffmpeg-specific stuff
+#include "ffmpegNetStreamUtil.h"
+
+
#include <queue>
#include <boost/thread/thread.hpp>
#include <boost/bind.hpp>
@@ -39,8 +52,8 @@
#include <memory>
#include <cassert>
-#include "impl.h"
+// TODO: drop ffmpeg-specific stuff here ?
#ifdef HAVE_FFMPEG_AVFORMAT_H
extern "C" {
#include <ffmpeg/avformat.h>
@@ -53,13 +66,6 @@
}
#endif
-#include "image.h"
-#include "StreamProvider.h"
-#include "NetStream.h" // for inheritance
-#include "VirtualClock.h"
-
-#include "ffmpegNetStreamUtil.h"
-
/// Uncomment the following to load media in a separate thread
//#define LOAD_MEDIA_IN_A_SEPARATE_THREAD
@@ -142,6 +148,18 @@
DEC_BUFFERING,
};
+ /// Gets video info from the parser and initializes _videoDecoder
+ //
+ /// @param parser the parser to use to get video information.
+ ///
+ void initVideoDecoder(media::MediaParser& parser);
+
+ /// Gets audio info from the parser and initializes _audioDecoder
+ //
+ /// @param parser the parser to use to get audio information.
+ ///
+ void initAudioDecoder(media::MediaParser& parser);
+
DecodingState _decoding_state;
// Mutex protecting _playback_state and _decoding_state
@@ -192,40 +210,11 @@
// Used to decode and push the next available (non-FLV) frame to the audio or video queue
bool decodeMediaFrame();
- /// Used to push decoded version of next available FLV frame to the audio or video queue
- //
- /// Called by ::av_streamer to buffer more a/v frames when possible.
- ///
- /// Will call decodeVideo or decodeAudio depending on frame type, and return
- /// what they return.
- /// Will set decodingStatus to DEC_BUFFERING when starving on input
- ///
- /// This is a blocking call.
- //
- /// @returns :
- /// If next frame is video and:
- /// - we have no video decoding context
- /// - or there is a decoding error
- /// - or there is a conversion error
- /// - or renderer requested format is NONE
- /// ... false will be returned.
- /// In any other case, true is returned.
- ///
- /// NOTE: if EOF is reached, true is returned by decodingStatus is set to DEC_STOPPED
- ///
- /// NOTE: (FIXME) if we succeeded decoding but the relative queue was full,
- /// true will be returned but nothing would be pushed on the queues.
- ///
- /// TODO: return a more informative value to tell what happened.
- /// TODO: make it simpler !
- ///
- bool decodeFLVFrame();
-
/// Decode next video frame fetching it MediaParser cursor
//
/// @return 0 on EOF or error, a decoded video otherwise
///
- media::raw_mediadata_t* decodeNextVideoFrame();
+ std::auto_ptr<image::rgb> decodeNextVideoFrame();
/// Decode next audio frame fetching it MediaParser cursor
//
@@ -248,29 +237,7 @@
/// 3. next element in cursor has timestamp > tx
/// 4. there was an error decoding
///
- media::raw_mediadata_t* getDecodedVideoFrame(boost::uint32_t ts);
-
- /// Used to decode a video frame
- //
- /// This is a blocking call.
- /// If no Video decoding context exists (m_VCodecCtx), 0 is returned.
- /// On decoding (or converting) error, 0 is returned.
- /// If renderer requested video format is render::NONE, 0 is returned.
- /// In any other case, a decoded video frame is returned.
- ///
- /// TODO: return a more informative value to tell what happened.
- ///
- media::raw_mediadata_t* decodeVideo( AVPacket* packet );
-
- /// Used to decode an audio frame
- //
- /// This is a blocking call.
- /// If no Video decoding context exists (m_ACodecCtx), 0 is returned.
- /// In any other case, a decoded audio frame is returned.
- ///
- /// TODO: return a more informative value to tell what happened.
- ///
- media::raw_mediadata_t* decodeAudio( AVPacket* packet );
+ std::auto_ptr<image::rgb> getDecodedVideoFrame(boost::uint32_t ts);
// Used to calculate a decimal value from a ffmpeg fraction
inline double as_double(AVRational time)
@@ -280,24 +247,11 @@
DecodingState decodingStatus(DecodingState newstate = DEC_NONE);
- int m_video_index;
- int m_audio_index;
-
- // video
- AVCodecContext* m_VCodecCtx;
- AVStream* m_video_stream;
-
- // audio
- AVCodecContext *m_ACodecCtx;
- AVStream* m_audio_stream;
-
- // the format (mp3, avi, etc.)
- AVFormatContext *m_FormatCtx;
-
- AVFrame* m_Frame;
+ /// Video decoder
+ std::auto_ptr<media::VideoDecoder> _videoDecoder;
- // Use for resampling audio
- media::AudioResampler _resampler;
+ /// Audio decoder
+ std::auto_ptr<media::AudioDecoder> _audioDecoder;
#ifdef LOAD_MEDIA_IN_A_SEPARATE_THREAD
/// The parser thread
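The header keeps the small as_double() helper for turning an AVRational time_base into a double; the removed decoding code multiplied that by a tick count and by 1000 to get milliseconds. A self-contained sketch with a minimal stand-in for AVRational (so ffmpeg headers are not needed):

```cpp
#include <cstdint>

// Minimal stand-in for ffmpeg's AVRational {num, den}.
struct Rational { int num; int den; };

// Mirrors NetStreamFfmpeg::as_double(): the fraction's value as double.
inline double as_double(Rational r)
{
    return r.num / static_cast<double>(r.den);
}

// Convert a timestamp in time_base units to milliseconds, as the
// removed decodeVideo() did for packet->dts.
std::uint32_t ts_to_ms(std::int64_t ts, Rational time_base)
{
    return static_cast<std::uint32_t>(as_double(time_base) * ts * 1000.0);
}
```

Note the cast to double before dividing: the removed seek code computed `time_base.num / time_base.den` as an integer division first, which truncates to zero for the usual num < den time bases.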
Index: server/parser/video_stream_def.cpp
===================================================================
RCS file: /sources/gnash/gnash/server/parser/video_stream_def.cpp,v
retrieving revision 1.47
retrieving revision 1.48
diff -u -b -r1.47 -r1.48
--- server/parser/video_stream_def.cpp 3 Jun 2008 14:48:55 -0000 1.47
+++ server/parser/video_stream_def.cpp 3 Jun 2008 16:11:45 -0000 1.48
@@ -22,6 +22,8 @@
#include "render.h"
#include "BitsReader.h"
#include "MediaHandler.h"
+#include "MediaParser.h" // for VideoInfo
+#include "VideoDecoder.h"
#include <boost/bind.hpp>
@@ -90,7 +92,8 @@
return;
}
- _decoder = mh->createVideoDecoder(m_codec_id, _width, _height);
+ media::VideoInfo info(m_codec_id, _width, _height, 0 /*framerate*/, 0 /*duration*/, media::FLASH /*typei*/);
+ _decoder = mh->createVideoDecoder(info);
if ( ! _decoder.get() )
{
log_error(_("Could not create video decoder for codec id %d"),
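The new call above packs the codec id and frame geometry into a media::VideoInfo before asking the MediaHandler for a decoder, making createVideoDecoder symmetric with the new createAudioDecoder. A hedged stand-in mirroring the constructor arguments used in the diff (field names are guesses; the real class is declared in libmedia/MediaParser.h):

```cpp
// Hypothetical stand-ins for media::codecType and media::VideoInfo,
// shaped after the six-argument constructor call in the diff.
enum codecType { FLASH, FFMPEG };

struct VideoInfo {
    VideoInfo(int codecId, int w, int h, int fps, int dur, codecType t)
        : codec(codecId), width(w), height(h),
          frameRate(fps), duration(dur), type(t)
    {}
    int codec, width, height, frameRate, duration;
    codecType type;
};
```

Passing a single info object lets future fields (frame rate, duration) reach the decoder without changing the factory signature again, which is the consistency goal stated in the log message.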