[Gnash-commit] gnash ChangeLog libmedia/FLVParser.cpp libmedia...
From: Sandro Santilli
Subject: [Gnash-commit] gnash ChangeLog libmedia/FLVParser.cpp libmedia...
Date: Fri, 06 Jun 2008 16:45:10 +0000
CVSROOT: /sources/gnash
Module name: gnash
Changes by: Sandro Santilli <strk> 08/06/06 16:45:09
Modified files:
. : ChangeLog
libmedia : FLVParser.cpp FLVParser.h Makefile.am
MediaParser.h
libmedia/ffmpeg: MediaParserFfmpeg.cpp MediaParserFfmpeg.h
server/asobj : NetStream.cpp NetStreamFfmpeg.cpp
NetStreamFfmpeg.h
Added files:
libmedia : MediaParser.cpp
Log message:
* libmedia/: Makefile.am, MediaParser.{cpp,h}:
Change MediaParser so that "The Buffer" consists of queues
of encoded frames; move all buffer inspectors
into the base class.
* libmedia/FLVParser.{cpp,h}: drop all buffer inspectors
(now in base class), reimplement fillers to push encoded
frames.
* libmedia/ffmpeg/MediaParserFfmpeg.{cpp,h}: drop all
buffer inspectors (now in base class), reimplement
fillers to push encoded frames.
* server/asobj/NetStream.cpp: minor comment cleanup.
* server/asobj/NetStreamFfmpeg.h: don't override bufferLength;
the one in NetStream is adequate for now (not 100%,
but could be once PlayHead moves into the base class).
* server/asobj/NetStreamFfmpeg.cpp: drop bufferLength override,
clean up some debugging calls.
CVSWeb URLs:
http://cvs.savannah.gnu.org/viewcvs/gnash/ChangeLog?cvsroot=gnash&r1=1.6836&r2=1.6837
http://cvs.savannah.gnu.org/viewcvs/gnash/libmedia/FLVParser.cpp?cvsroot=gnash&r1=1.12&r2=1.13
http://cvs.savannah.gnu.org/viewcvs/gnash/libmedia/FLVParser.h?cvsroot=gnash&r1=1.14&r2=1.15
http://cvs.savannah.gnu.org/viewcvs/gnash/libmedia/Makefile.am?cvsroot=gnash&r1=1.24&r2=1.25
http://cvs.savannah.gnu.org/viewcvs/gnash/libmedia/MediaParser.h?cvsroot=gnash&r1=1.18&r2=1.19
http://cvs.savannah.gnu.org/viewcvs/gnash/libmedia/MediaParser.cpp?cvsroot=gnash&rev=1.1
http://cvs.savannah.gnu.org/viewcvs/gnash/libmedia/ffmpeg/MediaParserFfmpeg.cpp?cvsroot=gnash&r1=1.6&r2=1.7
http://cvs.savannah.gnu.org/viewcvs/gnash/libmedia/ffmpeg/MediaParserFfmpeg.h?cvsroot=gnash&r1=1.6&r2=1.7
http://cvs.savannah.gnu.org/viewcvs/gnash/server/asobj/NetStream.cpp?cvsroot=gnash&r1=1.96&r2=1.97
http://cvs.savannah.gnu.org/viewcvs/gnash/server/asobj/NetStreamFfmpeg.cpp?cvsroot=gnash&r1=1.145&r2=1.146
http://cvs.savannah.gnu.org/viewcvs/gnash/server/asobj/NetStreamFfmpeg.h?cvsroot=gnash&r1=1.73&r2=1.74
Patches:
Index: ChangeLog
===================================================================
RCS file: /sources/gnash/gnash/ChangeLog,v
retrieving revision 1.6836
retrieving revision 1.6837
diff -u -b -r1.6836 -r1.6837
--- ChangeLog 6 Jun 2008 14:21:33 -0000 1.6836
+++ ChangeLog 6 Jun 2008 16:45:07 -0000 1.6837
@@ -1,3 +1,22 @@
+2008-06-06 Sandro Santilli <address@hidden>
+
+ * libmedia/: Makefile.am, MediaParser.{cpp,h}:
+ Change MediaParser so that "The Buffer" consists of queues
+ of encoded frames; move all buffer inspectors
+ into the base class.
+ * libmedia/FLVParser.{cpp,h}: drop all buffer inspectors
+ (now in base class), reimplement fillers to push encoded
+ frames.
+ * libmedia/ffmpeg/MediaParserFfmpeg.{cpp,h}: drop all
+ buffer inspectors (now in base class), reimplement
+ fillers to push encoded frames.
+ * server/asobj/NetStream.cpp: minor comment cleanup.
+ * server/asobj/NetStreamFfmpeg.h: don't override bufferLength;
+ the one in NetStream is adequate for now (not 100%,
+ but could be once PlayHead moves into the base class).
+ * server/asobj/NetStreamFfmpeg.cpp: drop bufferLength override,
+ clean up some debugging calls.
+
2008-06-06 Rob Savoye <address@hidden>
* libamf/amf.cpp: Add support for NULL objects.
Index: libmedia/FLVParser.cpp
===================================================================
RCS file: /sources/gnash/gnash/libmedia/FLVParser.cpp,v
retrieving revision 1.12
retrieving revision 1.13
diff -u -b -r1.12 -r1.13
--- libmedia/FLVParser.cpp 4 Jun 2008 17:13:35 -0000 1.12
+++ libmedia/FLVParser.cpp 6 Jun 2008 16:45:08 -0000 1.13
@@ -18,12 +18,14 @@
//
-#include <string>
-#include <iosfwd>
#include "FLVParser.h"
#include "amf.h"
#include "log.h"
#include "utility.h"
+#include "GnashException.h"
+
+#include <string>
+#include <iosfwd>
using namespace std;
@@ -36,481 +38,50 @@
namespace gnash {
namespace media {
-static std::auto_ptr<EncodedVideoFrame>
-makeVideoFrame(tu_file& in, const FLVVideoFrameInfo& frameInfo)
-{
- std::auto_ptr<EncodedVideoFrame> frame;
-
- boost::uint32_t dataSize = frameInfo.dataSize;
- boost::uint64_t timestamp = frameInfo.timestamp;
-
- if ( in.set_position(frameInfo.dataPosition) )
- {
- log_error(_("Failed seeking to videoframe in FLV input"));
- return frame;
- }
-
- unsigned long int chunkSize = smallestMultipleContaining(READ_CHUNKS, dataSize+PADDING_BYTES);
-
- boost::uint8_t* data = new boost::uint8_t[chunkSize];
- size_t bytesread = in.read_bytes(data, dataSize);
-
- unsigned long int padding = chunkSize-dataSize;
- assert(padding);
- memset(data + bytesread, 0, padding);
-
- // We won't need frameNum, so will set to zero...
- // TODO: fix this ?
- // NOTE: ownership of 'data' is transferred here
- frame.reset( new EncodedVideoFrame(data, dataSize, 0, timestamp) );
- return frame;
-}
-
-static std::auto_ptr<EncodedAudioFrame>
-makeAudioFrame(tu_file& in, const FLVAudioFrameInfo& frameInfo)
-{
- std::auto_ptr<EncodedAudioFrame> frame ( new EncodedAudioFrame );
- frame->dataSize = frameInfo.dataSize;
- frame->timestamp = frameInfo.timestamp;
-
-
- if ( in.set_position(frameInfo.dataPosition) )
- {
- log_error(_("Failed seeking to audioframe in FLV input"));
- frame.reset();
- return frame;
- }
-
- unsigned long int dataSize = frameInfo.dataSize;
- unsigned long int chunkSize = smallestMultipleContaining(READ_CHUNKS, dataSize+PADDING_BYTES);
-
- frame->data.reset( new boost::uint8_t[chunkSize] );
- size_t bytesread = in.read_bytes(frame->data.get(), dataSize);
-
- unsigned long int padding = chunkSize-dataSize;
- assert(padding);
- memset(frame->data.get() + bytesread, 0, padding);
-
- return frame;
-}
-
FLVParser::FLVParser(std::auto_ptr<tu_file> lt)
:
MediaParser(lt),
_lastParsedPosition(0),
- _videoInfo(NULL),
- _audioInfo(NULL),
_nextAudioFrame(0),
_nextVideoFrame(0),
_audio(false),
_video(false)
{
+ if ( ! parseHeader() )
+ throw GnashException("FLVParser couldn't parse header from input");
}
FLVParser::~FLVParser()
{
- for (VideoFrames::iterator i=_videoFrames.begin(),
- e=_videoFrames.end(); i!=e; ++i)
- {
- delete (*i);
- }
-
- for (AudioFrames::iterator i=_audioFrames.begin(),
- e=_audioFrames.end(); i!=e; ++i)
- {
- delete (*i);
- }
+ // nothing to do here, all done in base class now
}
boost::uint32_t
-FLVParser::getBufferLength()
+FLVParser::seek(boost::uint32_t /*time*/)
{
- // TODO: figure wheter and why we should privilege
- // video frames over audio frames when both
- // are available
- // I belive the corrent behaviour here would
- // be using the smallest max-timestamp..
-
- if (_video && !_videoFrames.empty())
- {
- return _videoFrames.back()->timestamp;
- }
-
- if (_audio && ! _audioFrames.empty())
- {
- return _audioFrames.back()->timestamp;
- }
+ LOG_ONCE( log_unimpl("%s", __PRETTY_FUNCTION__) );
+ // In particular, what to do if there's no frames in queue ?
+ // just seek to the the later available first timestamp
return 0;
}
-boost::uint16_t
-FLVParser::videoFrameRate()
-{
- // Make sure that there are parsed some frames
- while(_videoFrames.size() < 2 && !_parsingComplete) {
- parseNextTag();
- }
-
- if (_videoFrames.size() < 2) return 0;
-
- boost::uint32_t framedelay = _videoFrames[1]->timestamp - _videoFrames[0]->timestamp;
-
- return static_cast<boost::int16_t>(1000 / framedelay);
-}
-
-
-boost::uint32_t
-FLVParser::videoFrameDelay()
-{
- // If there are no video in this FLV return 0
- if (!_video && _lastParsedPosition > 0) return 0;
-
- // Make sure that there are parsed some frames
- while(_videoFrames.size() < 2 && !_parsingComplete) {
- parseNextTag();
- }
-
- // If there is no video data return 0
- if (_videoFrames.size() == 0 || !_video || _nextVideoFrame < 2) return 0;
-
- return _videoFrames[_nextVideoFrame-1]->timestamp - _videoFrames[_nextVideoFrame-2]->timestamp;
-}
-
-boost::uint32_t
-FLVParser::audioFrameDelay()
-{
- // If there are no audio in this FLV return 0
- if (!_audio && _lastParsedPosition > 0) return 0;
-
- // Make sure that there are parsed some frames
- while(_audioFrames.size() < 2 && !_parsingComplete) {
- parseNextTag();
- }
-
- // If there is no video data return 0
- if (_audioFrames.size() == 0 || !_audio || _nextAudioFrame < 2) return 0;
-
- return _audioFrames[_nextAudioFrame-1]->timestamp - _audioFrames[_nextAudioFrame-2]->timestamp;
-}
bool
-FLVParser::nextAudioFrameTimestamp(boost::uint64_t& ts)
+FLVParser::parseNextChunk()
{
- // If there are no audio in this FLV return NULL
- //
- // TODO: FIXME: the condition assumes that if _lastParsedPosition > 0
- // we had a chance to figure if video was present !
- //
- if (!_audio && _lastParsedPosition > 0)
- {
- return false;
- }
-
- // Make sure that there are parsed enough frames to return the need frame
- while(_audioFrames.size() <= _nextAudioFrame && !_parsingComplete) {
- if (!parseNextTag()) break;
- }
-
- // If the needed frame can't be parsed (EOF reached) return NULL
- if (_audioFrames.empty() || _audioFrames.size() <= _nextAudioFrame)
+ static const int tagsPerChunk=10;
+ for (int i=0; i<tagsPerChunk; ++i)
{
- return false;
+ if ( ! parseNextTag() ) return false;
}
-
- FLVAudioFrameInfo* info = _audioFrames[_nextAudioFrame];
- ts = info->timestamp;
return true;
}
-std::auto_ptr<EncodedAudioFrame>
-FLVParser::nextAudioFrame()
-{
- std::auto_ptr<EncodedAudioFrame> frame;
-
- FLVAudioFrameInfo* frameInfo = peekNextAudioFrameInfo();
- if ( ! frameInfo ) return frame;
-
- frame = makeAudioFrame(*_stream, *frameInfo);
- if ( ! frame.get() )
- {
- log_error("Could not make audio frame %d", _nextAudioFrame);
- return frame;
- }
-
- _nextAudioFrame++;
- return frame;
-}
-
-bool
-FLVParser::nextVideoFrameTimestamp(boost::uint64_t& ts)
-{
- // If there are no video in this FLV return NULL
- //
- // TODO: FIXME: the condition assumes that if _lastParsedPosition > 0
- // we had a chance to figure if video was present !
- //
- if (!_video && _lastParsedPosition > 0)
- {
- return false;
- }
-
- // Make sure that there are parsed enough frames to return the need frame
- while(_videoFrames.size() <= static_cast<boost::uint32_t>(_nextVideoFrame) && !_parsingComplete)
- {
- if (!parseNextTag()) break;
- }
-
- // If the needed frame can't be parsed (EOF reached) return NULL
- if (_videoFrames.empty() || _videoFrames.size() <= _nextVideoFrame)
- {
- //gnash::log_debug("The needed frame (%d) can't be parsed (EOF reached)", _lastVideoFrame);
- return false;
- }
-
- FLVVideoFrameInfo* info = _videoFrames[_nextVideoFrame];
- ts = info->timestamp;
- return true;
-}
-
-std::auto_ptr<EncodedVideoFrame>
-FLVParser::nextVideoFrame()
-{
- FLVVideoFrameInfo* frameInfo = peekNextVideoFrameInfo();
- std::auto_ptr<EncodedVideoFrame> frame = makeVideoFrame(*_stream, *frameInfo);
- if ( ! frame.get() )
- {
- log_error("Could not make video frame %d", _nextVideoFrame);
- return frame;
- }
-
- _nextVideoFrame++;
- return frame;
-}
-
-
-boost::uint32_t
-FLVParser::seekAudio(boost::uint32_t time)
-{
-
- // If there is no audio data return NULL
- if (_audioFrames.empty()) return 0;
-
- // If there are no audio greater than the given time
- // the last audioframe is returned
- FLVAudioFrameInfo* lastFrame = _audioFrames.back();
- if (lastFrame->timestamp < time) {
- _nextAudioFrame = _audioFrames.size() - 1;
- return lastFrame->timestamp;
- }
-
- // We try to guess where in the vector the audioframe
- // with the correct timestamp is
- size_t numFrames = _audioFrames.size();
- double tpf = lastFrame->timestamp / numFrames; // time per frame
- size_t guess = size_t(time / tpf);
-
- // Here we test if the guess was ok, and adjust if needed.
- size_t bestFrame = utility::clamp<size_t>(guess, 0, _audioFrames.size()-1);
-
- // Here we test if the guess was ok, and adjust if needed.
- long diff = _audioFrames[bestFrame]->timestamp - time;
- if ( diff > 0 ) // our guess was too long
- {
- while ( bestFrame > 0 && _audioFrames[bestFrame-1]->timestamp > time ) --bestFrame;
- }
- else // our guess was too short
- {
- while ( bestFrame < _audioFrames.size()-1 && _audioFrames[bestFrame+1]->timestamp < time ) ++bestFrame;
- }
-
-#ifdef GNASH_DEBUG_SEEK
- gnash::log_debug("Seek (audio): " SIZET_FMT "/" SIZET_FMT " (%u/%u)", bestFrame, numFrames, _audioFrames[bestFrame]->timestamp, time);
-#endif
- _nextAudioFrame = bestFrame;
- return _audioFrames[bestFrame]->timestamp;
-
-}
-
-
-boost::uint32_t
-FLVParser::seekVideo(boost::uint32_t time)
-{
- if ( _videoFrames.empty() ) return 0;
-
- // If there are no videoframe greater than the given time
- // the last key videoframe is returned
- FLVVideoFrameInfo* lastFrame = _videoFrames.back();
- size_t numFrames = _videoFrames.size();
- if (lastFrame->timestamp < time)
- {
- size_t lastFrameNum = numFrames -1;
- while (! lastFrame->isKeyFrame() )
- {
- lastFrameNum--;
- lastFrame = _videoFrames[lastFrameNum];
- }
-
- _nextVideoFrame = lastFrameNum;
- return lastFrame->timestamp;
-
- }
-
- // We try to guess where in the vector the videoframe
- // with the correct timestamp is
- double tpf = lastFrame->timestamp / numFrames; // time per frame
- size_t guess = size_t(time / tpf);
-
- size_t bestFrame = utility::clamp<size_t>(guess, 0, _videoFrames.size()-1);
-
- // Here we test if the guess was ok, and adjust if needed.
- long diff = _videoFrames[bestFrame]->timestamp - time;
- if ( diff > 0 ) // our guess was too long
- {
- while ( bestFrame > 0 && _videoFrames[bestFrame-1]->timestamp > time ) --bestFrame;
- }
- else // our guess was too short
- {
- while ( bestFrame < _videoFrames.size()-1 && _videoFrames[bestFrame+1]->timestamp < time ) ++bestFrame;
- }
-
- // Find closest backward keyframe
- size_t rewindKeyframe = bestFrame;
- while ( rewindKeyframe && ! _videoFrames[rewindKeyframe]->isKeyFrame() )
- {
- rewindKeyframe--;
- }
-
- // Find closest forward keyframe
- size_t forwardKeyframe = bestFrame;
- size_t size = _videoFrames.size();
- while (size > forwardKeyframe+1 && ! _videoFrames[forwardKeyframe]->isKeyFrame() )
- {
- forwardKeyframe++;
- }
-
- // We can't ensure we were able to find a key frame *after* the best position
- // in that case we just use any previous keyframe instead..
- if ( ! _videoFrames[forwardKeyframe]->isKeyFrame() )
- {
- bestFrame = rewindKeyframe;
- }
- else
- {
- boost::int32_t forwardDiff = _videoFrames[forwardKeyframe]->timestamp - time;
- boost::int32_t rewindDiff = time - _videoFrames[rewindKeyframe]->timestamp;
-
- if (forwardDiff < rewindDiff) bestFrame = forwardKeyframe;
- else bestFrame = rewindKeyframe;
- }
-
-#ifdef GNASH_DEBUG_SEEK
- gnash::log_debug("Seek (video): " SIZET_FMT "/" SIZET_FMT " (%u/%u)", bestFrame, numFrames, _videoFrames[bestFrame]->timestamp, time);
-#endif
-
- _nextVideoFrame = bestFrame;
- assert( _videoFrames[bestFrame]->isKeyFrame() );
- return _videoFrames[bestFrame]->timestamp;
-}
-
-
-
-VideoInfo*
-FLVParser::getVideoInfo()
-{
- // If there are no video in this FLV return NULL
- if (!_video && _lastParsedPosition > 0) return NULL;
-
- // Make sure that there are parsed some video frames
- while( ! _parsingComplete && !_videoInfo.get() ) parseNextTag();
-
- return _videoInfo.get(); // may be null
-}
-
-AudioInfo*
-FLVParser::getAudioInfo()
-{
- // If there are no audio in this FLV return NULL
- if (!_audio && _lastParsedPosition > 0) return NULL;
-
- // Make sure that there are parsed some audio frames
- while (!_parsingComplete && ! _audioInfo.get() )
- {
- parseNextTag();
- }
-
- return _audioInfo.get(); // may be null
-}
-
-bool
-FLVParser::isTimeLoaded(boost::uint32_t time)
-{
- // Parse frames until the need time is found, or EOF
- while (!_parsingComplete) {
- if (!parseNextTag()) break;
- if ((_videoFrames.size() > 0 && _videoFrames.back()->timestamp >= time)
- || (_audioFrames.size() > 0 && _audioFrames.back()->timestamp >= time)) {
- return true;
- }
- }
-
- if (_videoFrames.size() > 0 && _videoFrames.back()->timestamp >= time) {
- return true;
- }
-
- if (_audioFrames.size() > 0 && _audioFrames.back()->timestamp >= time) {
- return true;
- }
-
- return false;
-
-}
-
-boost::uint32_t
-FLVParser::seek(boost::uint32_t time)
-{
- GNASH_REPORT_FUNCTION;
-
- log_debug("FLVParser::seek(%d) ", time);
-
- if (time == 0) {
- if (_video) _nextVideoFrame = 0;
- if (_audio) _nextAudioFrame = 0;
- }
-
- // Video, if present, has more constraints
- // as to where we allow seeking (we only
- // allow seek to closest *key* frame).
- // So we first have video seeking tell us
- // what time is best for that, and next
- // we seek audio on that time
-
- if (_video)
- {
- time = seekVideo(time);
-#ifdef GNASH_DEBUG_SEEK
- log_debug(" seekVideo -> %d", time);
-#endif
- }
-
- if (_audio)
- {
- time = seekAudio(time);
-#ifdef GNASH_DEBUG_SEEK
- log_debug(" seekAudio -> %d", time);
-#endif
- }
-
- return time;
-}
-
bool FLVParser::parseNextTag()
{
if ( _parsingComplete ) return false;
- // Parse the header if not done already. If unsuccesfull return false.
- if (_lastParsedPosition == 0 && !parseHeader()) return false;
-
// Seek to next frame and skip the size of the last tag
if ( _stream->set_position(_lastParsedPosition+4) )
{
@@ -549,11 +120,9 @@
if (tag[0] == AUDIO_TAG)
{
- FLVAudioFrameInfo* frame = new FLVAudioFrameInfo;
- frame->dataSize = bodyLength - 1;
- frame->timestamp = timestamp;
- frame->dataPosition = _stream->get_position();
- _audioFrames.push_back(frame);
+ std::auto_ptr<EncodedAudioFrame> frame = readAudioFrame(bodyLength-1, timestamp);
+ if ( ! frame.get() ) { log_error("could not read audio frame?"); }
+ else _audioFrames.push_back(frame.release());
// If this is the first audioframe no info about the
// audio format has been noted, so we do that now
@@ -576,12 +145,14 @@
}
else if (tag[0] == VIDEO_TAG)
{
- FLVVideoFrameInfo* frame = new FLVVideoFrameInfo;
- frame->dataSize = bodyLength - 1;
- frame->timestamp = timestamp;
- frame->dataPosition = _stream->get_position();
- frame->frameType = (tag[11] & 0xf0) >> 4;
- _videoFrames.push_back(frame);
+ bool isKeyFrame = (tag[11] & 0xf0) >> 4;
+ UNUSED(isKeyFrame); // may be used for building seekable indexes...
+
+ size_t dataPosition = _stream->get_position();
+
+ std::auto_ptr<EncodedVideoFrame> frame = readVideoFrame(bodyLength-1, timestamp);
+ if ( ! frame.get() ) { log_error("could not read video frame?"); }
+ else _videoFrames.push_back(frame.release());
// If this is the first videoframe no info about the
// video format has been noted, so we do that now
@@ -594,14 +165,21 @@
// Extract the video size from the videodata header
if (codec == VIDEO_CODEC_H263) {
- if ( _stream->set_position(frame->dataPosition) )
- {
+
+ // We're going to re-read some data here
+ // (can likely avoid with a better cleanup)
+
+ size_t bkpos = _stream->get_position();
+ if ( _stream->set_position(dataPosition) ) {
log_error(" Couldn't seek to VideoTag data position");
_parsingComplete=true;
return false;
}
boost::uint8_t videohead[12];
+
int actuallyRead = _stream->read_bytes(videohead, 12);
+ _stream->set_position(bkpos); // rewind
+
if ( actuallyRead < 12 )
{
log_error("FLVParser::parseNextTag: can't read H263 video header (needed 12 bytes, only got %d)", actuallyRead);
@@ -686,6 +264,7 @@
log_error("FLVParser::parseHeader: couldn't read 9 bytes of header");
return false;
}
+ _lastParsedPosition = 9;
// Check if this is really a FLV file
if (header[0] != 'F' || header[1] != 'L' || header[2] != 'V') return false;
@@ -698,7 +277,18 @@
log_debug("Parsing FLV version %d, audio:%d, video:%d", version, _audio, _video);
- _lastParsedPosition = 9;
+ // Make sure we initialize audio/video info (if any)
+ while ( !_parsingComplete && (_video && !_videoInfo.get()) || (_audio
&& !_audioInfo.get()) )
+ {
+ parseNextTag();
+ }
+
+ if ( _video && !_videoInfo.get() )
+ log_error(" couldn't find any video frame in an FLV advertising video in header");
+
+ if ( _audio && !_audioInfo.get() )
+ log_error(" couldn't find any audio frame in an FLV advertising audio in header");
+
return true;
}
@@ -714,52 +304,56 @@
return _lastParsedPosition;
}
-/* private */
-FLVAudioFrameInfo*
-FLVParser::peekNextAudioFrameInfo()
+/*private*/
+std::auto_ptr<EncodedAudioFrame>
+FLVParser::readAudioFrame(boost::uint32_t dataSize, boost::uint32_t timestamp)
{
- // If there are no audio in this FLV return NULL
- if (!_audio && _lastParsedPosition > 0) return 0;
+ tu_file& in = *_stream;
- // Make sure that there are parsed enough frames to return the need frame
- while(_audioFrames.size() <= _nextAudioFrame && !_parsingComplete) {
- if (!parseNextTag()) break;
- }
+ //log_debug("Reading the %dth audio frame, with data size %d, from position %d", _audioFrames.size()+1, dataSize, in.get_position());
+
+ std::auto_ptr<EncodedAudioFrame> frame ( new EncodedAudioFrame );
+ frame->dataSize = dataSize;
+ frame->timestamp = timestamp;
- // If the needed frame can't be parsed (EOF reached) return NULL
- if (_audioFrames.empty() || _audioFrames.size() <= _nextAudioFrame)
+ unsigned long int chunkSize = smallestMultipleContaining(READ_CHUNKS, dataSize+PADDING_BYTES);
+
+ frame->data.reset( new boost::uint8_t[chunkSize] );
+ size_t bytesread = in.read_bytes(frame->data.get(), dataSize);
+ if ( bytesread < dataSize )
{
- return 0;
+ log_error("FLVParser::readAudioFrame: could only read %d/%d
bytes", bytesread, dataSize);
}
- return _audioFrames[_nextAudioFrame];
+ unsigned long int padding = chunkSize-dataSize;
+ assert(padding);
+ memset(frame->data.get() + bytesread, 0, padding);
+
+ return frame;
}
/*private*/
-FLVVideoFrameInfo*
-FLVParser::peekNextVideoFrameInfo()
+std::auto_ptr<EncodedVideoFrame>
+FLVParser::readVideoFrame(boost::uint32_t dataSize, boost::uint32_t timestamp)
{
- // If there are no video in this FLV return NULL
- if (!_video && _lastParsedPosition > 0)
- {
- //gnash::log_debug("no video, or lastParserPosition > 0");
- return 0;
- }
+ tu_file& in = *_stream;
- // Make sure that there are parsed enough frames to return the need frame
- while(_videoFrames.size() <= static_cast<boost::uint32_t>(_nextVideoFrame) && !_parsingComplete)
- {
- if (!parseNextTag()) break;
- }
+ std::auto_ptr<EncodedVideoFrame> frame;
- // If the needed frame can't be parsed (EOF reached) return NULL
- if (_videoFrames.empty() || _videoFrames.size() <= _nextVideoFrame)
- {
- //gnash::log_debug("The needed frame (%d) can't be parsed (EOF reached)", _lastVideoFrame);
- return 0;
- }
+ unsigned long int chunkSize = smallestMultipleContaining(READ_CHUNKS, dataSize+PADDING_BYTES);
- return _videoFrames[_nextVideoFrame];
+ boost::uint8_t* data = new boost::uint8_t[chunkSize];
+ size_t bytesread = in.read_bytes(data, dataSize);
+
+ unsigned long int padding = chunkSize-dataSize;
+ assert(padding);
+ memset(data + bytesread, 0, padding);
+
+ // We won't need frameNum, so will set to zero...
+ // TODO: fix this ?
+ // NOTE: ownership of 'data' is transferred here
+ frame.reset( new EncodedVideoFrame(data, dataSize, 0, timestamp) );
+ return frame;
}
} // end of gnash::media namespace
Index: libmedia/FLVParser.h
===================================================================
RCS file: /sources/gnash/gnash/libmedia/FLVParser.h,v
retrieving revision 1.14
retrieving revision 1.15
diff -u -b -r1.14 -r1.15
--- libmedia/FLVParser.h 4 Jun 2008 17:08:45 -0000 1.14
+++ libmedia/FLVParser.h 6 Jun 2008 16:45:08 -0000 1.15
@@ -175,45 +175,6 @@
/// Kills the parser...
~FLVParser();
- // see dox in MediaParser.h
- bool nextAudioFrameTimestamp(boost::uint64_t& ts);
-
- // see dox in MediaParser.h
- bool nextVideoFrameTimestamp(boost::uint64_t& ts);
-
- // see dox in MediaParser.h
- std::auto_ptr<EncodedAudioFrame> nextAudioFrame();
-
- // see dox in MediaParser.h
- std::auto_ptr<EncodedVideoFrame> nextVideoFrame();
-
- /// Returns information about video in the stream.
- //
- /// The returned object is owned by the FLVParser object.
- /// Can return NULL if video contains NO video frames.
- /// Will block till either parsing finished or a video frame is found.
- ///
- VideoInfo* getVideoInfo();
-
- /// Returns a FLVAudioInfo class about the audiostream
- //
- /// TODO: return a more abstract AudioInfo
- ///
- AudioInfo* getAudioInfo();
-
- /// \brief
- /// Asks if a frame with with a timestamp larger than
- /// the given time is available.
- //
- /// If such a frame is not
- /// available in list of already the parsed frames, we
- /// parse some more. This is used to check how much is buffered.
- ///
- /// @param time
- /// Timestamp, in milliseconds.
- ///
- bool isTimeLoaded(boost::uint32_t time);
-
/// \brief
/// Seeks to the closest possible position the given position,
/// and returns the new position.
@@ -225,30 +186,7 @@
///
boost::uint32_t seek(boost::uint32_t);
- /// Returns the framedelay from the last to the current
- /// audioframe in milliseconds. This is used for framerate.
- //
- boost::uint32_t audioFrameDelay();
-
- /// \brief
- /// Returns the framedelay from the last to the current
- /// videoframe in milliseconds.
- //
- boost::uint32_t videoFrameDelay();
-
- /// Returns the framerate of the video
- //
- boost::uint16_t videoFrameRate();
-
- /// Returns the "bufferlength", meaning the differens between the
- /// current frames timestamp and the timestamp of the last parseable
- /// frame. Returns the difference in milliseconds.
- //
- boost::uint32_t getBufferLength();
-
- virtual bool parseNextChunk() {
- return parseNextTag();
- }
+ virtual bool parseNextChunk();
/// Parses next tag from the file
//
@@ -262,65 +200,15 @@
private:
- /// \brief
- /// Get info about the audio frame to return
- /// on nextAudioFrame() call
- //
- /// Returned object is owned by this class.
- ///
- FLVAudioFrameInfo* peekNextAudioFrameInfo();
-
- /// \brief
- /// Get info about the video frame to return
- /// on nextAudioFrame() call
- //
- /// Returned object is owned by this class.
- ///
- FLVVideoFrameInfo* peekNextVideoFrameInfo();
-
- /// seeks to the closest possible position the given position,
- /// and returns the new position.
- boost::uint32_t seekAudio(boost::uint32_t time);
-
- /// seeks to the closest possible position the given position,
- /// and returns the new position.
- boost::uint32_t seekVideo(boost::uint32_t time);
-
/// Parses the header of the file
bool parseHeader();
// Functions used to extract numbers from the file
inline boost::uint32_t getUInt24(boost::uint8_t* in);
- // NOTE: FLVVideoFrameInfo is a relatively small structure,
- // chances are keeping by value here would reduce
- // memory fragmentation with no big cost
- typedef std::vector<FLVVideoFrameInfo*> VideoFrames;
-
- /// list of videoframes, does no contain the frame data.
- //
- /// Elements owned by this class.
- VideoFrames _videoFrames;
-
- // NOTE: FLVAudioFrameInfo is a relatively small structure,
- // chances are keeping by value here would reduce
- // memory fragmentation with no big cost
- typedef std::vector<FLVAudioFrameInfo*> AudioFrames;
-
- /// list of audioframes, does no contain the frame data.
- //
- /// Elements owned by this class.
- AudioFrames _audioFrames;
-
/// The position where the parsing should continue from.
boost::uint64_t _lastParsedPosition;
- /// Info about the video stream (if any)
- std::auto_ptr<VideoInfo> _videoInfo;
-
- /// Info about the audio stream (if any)
- std::auto_ptr<AudioInfo> _audioInfo;
-
/// Audio frame cursor position
//
/// This is the video frame number that will
@@ -340,6 +228,10 @@
/// Audio stream is present
bool _video;
+
+ std::auto_ptr<EncodedAudioFrame> readAudioFrame(boost::uint32_t dataSize, boost::uint32_t timestamp);
+
+ std::auto_ptr<EncodedVideoFrame> readVideoFrame(boost::uint32_t dataSize, boost::uint32_t timestamp);
};
} // end of gnash::media namespace
Index: libmedia/Makefile.am
===================================================================
RCS file: /sources/gnash/gnash/libmedia/Makefile.am,v
retrieving revision 1.24
retrieving revision 1.25
diff -u -b -r1.24 -r1.25
--- libmedia/Makefile.am 4 Jun 2008 10:06:28 -0000 1.24
+++ libmedia/Makefile.am 6 Jun 2008 16:45:08 -0000 1.25
@@ -76,6 +76,7 @@
MediaHandler.cpp \
AudioDecoderNellymoser.cpp \
AudioDecoderSimple.cpp \
+ MediaParser.cpp \
FLVParser.cpp \
Util.cpp \
$(NULL)
Index: libmedia/MediaParser.h
===================================================================
RCS file: /sources/gnash/gnash/libmedia/MediaParser.h,v
retrieving revision 1.18
retrieving revision 1.19
diff -u -b -r1.18 -r1.19
--- libmedia/MediaParser.h 4 Jun 2008 10:06:29 -0000 1.18
+++ libmedia/MediaParser.h 6 Jun 2008 16:45:08 -0000 1.19
@@ -24,10 +24,11 @@
#include "gnashconfig.h"
#endif
+#include "tu_file.h" // for inlines
+
#include <boost/scoped_array.hpp>
#include <memory>
-
-#include "tu_file.h" // for inlines
+#include <deque>
namespace gnash {
namespace media {
@@ -237,20 +238,25 @@
// of subclasses will never be invoked, tipically resulting
// in memory leaks..
//
- virtual ~MediaParser() {};
+ virtual ~MediaParser();
- /// Returns the "bufferlength", meaning the difference between the
- /// current frames timestamp and the timestamp of the last parseable
- /// frame. Returns the difference in milliseconds.
+ /// Returns minimum length of available buffers in milliseconds
//
- virtual boost::uint32_t getBufferLength()=0;
+ /// TODO: FIXME: NOTE: this is currently used by NetStream.bufferLength
+ /// but is bogus as it doesn't take the *current* playhead cursor time
+ /// into account. A proper way would be having a getLastBufferTime ()
+ /// interface here, returning minimum timestamp of last available
+ /// frames and let NetStream::bufferLength() use that with playhead
+ /// time to find out...
+ ///
+ boost::uint64_t getBufferLength() const;
/// Get timestamp of the video frame which would be returned on nextVideoFrame
//
/// @return false if there no video frame left
/// (either none or no more)
///
- virtual bool nextVideoFrameTimestamp(boost::uint64_t& ts)=0;
+ bool nextVideoFrameTimestamp(boost::uint64_t& ts) const;
/// Returns the next video frame in the parsed buffer, advancing video cursor.
//
@@ -259,14 +265,14 @@
/// you can check with parsingCompleted() to know wheter this is due to
/// EOF reached.
///
- virtual std::auto_ptr<EncodedVideoFrame> nextVideoFrame()=0;
+ std::auto_ptr<EncodedVideoFrame> nextVideoFrame();
/// Get timestamp of the audio frame which would be returned on nextAudioFrame
//
/// @return false if there no video frame left
/// (either none or no more)
///
- virtual bool nextAudioFrameTimestamp(boost::uint64_t& ts)=0;
+ bool nextAudioFrameTimestamp(boost::uint64_t& ts) const;
/// Returns the next audio frame in the parsed buffer, advancing audio cursor.
//
@@ -275,18 +281,22 @@
/// you can check with parsingCompleted() to know wheter this is due to
/// EOF reached.
///
- virtual std::auto_ptr<EncodedAudioFrame> nextAudioFrame()=0;
+ std::auto_ptr<EncodedAudioFrame> nextAudioFrame();
/// Is the input MP3?
//
/// @return if the input audio is MP3
///
+ /// TODO: drop ?
+ ///
bool isAudioMp3() { return _isAudioMp3; }
/// Is the input Nellymoser?
//
/// @return if the input audio is Nellymoser
///
+ /// TODO: drop ?
+ ///
bool isAudioNellymoser() { return _isAudioNellymoser; }
/// Returns a VideoInfo class about the videostream
@@ -294,14 +304,14 @@
/// @return a VideoInfo class about the videostream,
/// or zero if stream contains no video
///
- virtual VideoInfo* getVideoInfo() { return 0; }
+ VideoInfo* getVideoInfo() { return _videoInfo.get(); }
/// Returns a AudioInfo class about the audiostream
//
/// @return a AudioInfo class about the audiostream,
/// or zero if stream contains no audio
///
- virtual AudioInfo* getAudioInfo() { return 0; }
+ AudioInfo* getAudioInfo() { return _audioInfo.get(); }
/// Seeks to the closest possible position to the given position.
//
@@ -361,10 +371,49 @@
protected:
+ typedef std::deque<EncodedVideoFrame*> VideoFrames;
+ typedef std::deque<EncodedAudioFrame*> AudioFrames;
+
+ /// Queue of video frames (the video buffer)
+ //
+ /// Elements owned by this class.
+ ///
+ VideoFrames _videoFrames;
+
+ /// Queue of audio frames (the audio buffer)
+ //
+ /// Elements owned by this class.
+ ///
+ AudioFrames _audioFrames;
+
+ /// Return pointer to next encoded video frame in buffer
+ //
+ /// If no video is present, or queue is empty, 0 is returned
+ ///
+ const EncodedVideoFrame* peekNextVideoFrame() const;
+
+ /// Return pointer to next encoded audio frame in buffer
+ //
+ /// If no audio is present, or queue is empty, 0 is returned
+ ///
+ const EncodedAudioFrame* peekNextAudioFrame() const;
+
+ /// Info about the video stream (if any)
+ std::auto_ptr<VideoInfo> _videoInfo;
+
+ /// Info about the audio stream (if any)
+ std::auto_ptr<AudioInfo> _audioInfo;
+
/// Is the input audio MP3?
+ //
+ /// TODO: drop ?
+ ///
bool _isAudioMp3;
/// Is the input audio Nellymoser?
+ //
+ /// TODO: drop ?
+ ///
bool _isAudioNellymoser;
/// The stream used to access the file
@@ -372,6 +421,14 @@
/// Whether the parsing is complete or not
bool _parsingComplete;
+
+private:
+
+ /// Return diff between timestamp of last and first audio frame
+ boost::uint64_t audioBufferLength() const;
+
+ /// Return diff between timestamp of last and first video frame
+ boost::uint64_t videoBufferLength() const;
};
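
The MediaParser change above boils down to one pattern: "The Buffer" is a pair of deques of heap-allocated encoded frames owned by the parser, with a non-consuming peek, an ownership-transferring pop, and a buffer length measured as the timestamp span of the queued frames. A minimal standalone sketch of that scheme, where `Frame` and `FrameQueue` are hypothetical stand-ins for the Gnash `EncodedVideoFrame`/`EncodedAudioFrame` queues, not actual Gnash classes:

```cpp
#include <cstdint>
#include <deque>

// Stand-in for EncodedVideoFrame/EncodedAudioFrame: only the
// timestamp matters for buffer-length accounting.
struct Frame
{
	std::uint64_t timestamp;
	explicit Frame(std::uint64_t ts) : timestamp(ts) {}
};

class FrameQueue
{
public:
	// Queue owns its elements, like MediaParser's _videoFrames/_audioFrames
	~FrameQueue()
	{
		for (std::deque<Frame*>::iterator i = _frames.begin(),
			e = _frames.end(); i != e; ++i)
		{
			delete *i;
		}
	}

	// Takes ownership of the pushed frame (the "filler" side)
	void push(Frame* f) { _frames.push_back(f); }

	// Peek without consuming; 0 if the queue is empty
	// (as in peekNextVideoFrame/peekNextAudioFrame)
	const Frame* peek() const { return _frames.empty() ? 0 : _frames.front(); }

	// Consume the front frame, transferring ownership to the caller
	// (as in nextVideoFrame/nextAudioFrame)
	Frame* pop()
	{
		if (_frames.empty()) return 0;
		Frame* f = _frames.front();
		_frames.pop_front();
		return f;
	}

	// Diff between timestamps of last and first queued frame
	// (as in videoBufferLength/audioBufferLength)
	std::uint64_t lengthMs() const
	{
		if (_frames.empty()) return 0;
		return _frames.back()->timestamp - _frames.front()->timestamp;
	}

private:
	std::deque<Frame*> _frames;
};
```

With this shape the buffer inspectors no longer need to be virtual: subclasses only implement the filler that pushes frames, and the base class answers every query from the queues.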
Index: libmedia/ffmpeg/MediaParserFfmpeg.cpp
===================================================================
RCS file: /sources/gnash/gnash/libmedia/ffmpeg/MediaParserFfmpeg.cpp,v
retrieving revision 1.6
retrieving revision 1.7
diff -u -b -r1.6 -r1.7
--- libmedia/ffmpeg/MediaParserFfmpeg.cpp 4 Jun 2008 20:14:40 -0000 1.6
+++ libmedia/ffmpeg/MediaParserFfmpeg.cpp 6 Jun 2008 16:45:09 -0000 1.7
@@ -82,105 +82,6 @@
}
boost::uint32_t
-MediaParserFfmpeg::getBufferLength()
-{
- // TODO: figure whether and why we should privilege
- // video frames over audio frames when both
- // are available
- // I believe the correct behaviour here would
- // be using the smallest max-timestamp..
-
- if (_videoStream && ! _videoFrames.empty())
- {
- return _videoFrames.back()->timestamp;
- }
-
- if (_audioStream && ! _audioFrames.empty())
- {
- return _audioFrames.back()->timestamp;
- }
-
- return 0;
-}
-
-bool
-MediaParserFfmpeg::nextVideoFrameTimestamp(boost::uint64_t& ts)
-{
- // If there is no video in this stream return NULL
- if (!_videoStream) return false;
-
- // Make sure that enough frames are parsed to return the needed frame
- while(_videoFrames.size() <= _nextVideoFrame && !_parsingComplete)
- {
- if (!parseNextFrame()) break;
- }
-
- // If the needed frame can't be parsed (EOF reached) return NULL
- if (_videoFrames.empty() || _videoFrames.size() <= _nextVideoFrame)
- {
- //gnash::log_debug("The needed frame (%d) can't be parsed (EOF reached)", _lastVideoFrame);
- return false;
- }
-
- VideoFrameInfo* info = _videoFrames[_nextVideoFrame];
- ts = info->timestamp;
- return true;
-}
-
-std::auto_ptr<EncodedVideoFrame>
-MediaParserFfmpeg::nextVideoFrame()
-{
- std::auto_ptr<EncodedVideoFrame> ret;
- LOG_ONCE( log_unimpl("%s", __PRETTY_FUNCTION__) );
- return ret;
-}
-
-bool
-MediaParserFfmpeg::nextAudioFrameTimestamp(boost::uint64_t& ts)
-{
- // If there is no audio in this stream return NULL
- if (!_audioStream) return false;
-
- // Make sure that enough frames are parsed to return the needed frame
- while(_audioFrames.size() <= _nextAudioFrame && !_parsingComplete)
- {
- if (!parseNextFrame()) break;
- }
-
- // If the needed frame can't be parsed (EOF reached) return NULL
- if (_audioFrames.empty() || _audioFrames.size() <= _nextAudioFrame)
- {
- //gnash::log_debug("The needed frame (%d) can't be parsed (EOF reached)", _lastAudioFrame);
- return false;
- }
-
- AudioFrameInfo* info = _audioFrames[_nextAudioFrame];
- ts = info->timestamp;
- return true;
-}
-
-std::auto_ptr<EncodedAudioFrame>
-MediaParserFfmpeg::nextAudioFrame()
-{
- std::auto_ptr<EncodedAudioFrame> ret;
-
- LOG_ONCE( log_unimpl("%s", __PRETTY_FUNCTION__) );
- return ret;
-}
-
-VideoInfo*
-MediaParserFfmpeg::getVideoInfo()
-{
- return _videoInfo.get();
-}
-
-AudioInfo*
-MediaParserFfmpeg::getAudioInfo()
-{
- return _audioInfo.get();
-}
-
-boost::uint32_t
MediaParserFfmpeg::seek(boost::uint32_t pos)
{
log_debug("MediaParserFfmpeg::seek(%d) TESTING", pos);
@@ -234,26 +135,24 @@
//
boost::uint64_t timestamp = static_cast<boost::uint64_t>(packet.dts * as_double(_videoStream->time_base) * 1000.0);
+ LOG_ONCE( log_unimpl("%s", __PRETTY_FUNCTION__) );
+ return false;
+
+#if 0
+
// flags, for keyframe
bool isKeyFrame = packet.flags&PKT_FLAG_KEY;
- // Frame offset in input
- boost::int64_t offset = packet.pos;
- if ( offset < 0 )
- {
- LOG_ONCE(log_debug("Unknown offset of video frame, should we pretend we know ? or rely on ffmpeg seeking ? I guess the latter will do for a start."));
- //return false;
- }
-
VideoFrameInfo* info = new VideoFrameInfo;
info->dataSize = packet.size;
info->isKeyFrame = isKeyFrame;
- info->dataPosition = offset;
+ info->dataPosition = pos;
info->timestamp = timestamp;
_videoFrames.push_back(info); // takes ownership
return true;
+#endif
}
bool
@@ -272,22 +171,18 @@
//
boost::uint64_t timestamp = static_cast<boost::uint64_t>(packet.dts * as_double(_audioStream->time_base) * 1000.0);
- // Frame offset in input
- boost::int64_t offset = packet.pos;
- if ( offset < 0 )
- {
- LOG_ONCE(log_debug("Unknown offset of audio frame, should we pretend we know ? or rely on ffmpeg seeking ? I guess the latter will do for a start."));
- //return false;
- }
+ LOG_ONCE( log_unimpl("%s", __PRETTY_FUNCTION__) );
+ return false;
+#if 0
+ std::auto_ptr<EncodedAudioFrame> frame ( new EncodedAudioFrame );
- AudioFrameInfo* info = new AudioFrameInfo;
- info->dataSize = packet.size;
- info->dataPosition = offset > 0 ? (boost::uint64_t)offset : 0;
- info->timestamp = timestamp;
+ frame->dataSize = packet.size;
+ frame->timestamp = timestamp;
- _audioFrames.push_back(info); // takes ownership
+ _audioFrames.push_back(frame.release()); // takes ownership
return true;
+#endif
}
bool
@@ -295,17 +190,22 @@
{
if ( _parsingComplete )
{
- //log_debug("MediaParserFfmpeg::parseNextFrame: parsing complete, nothing to do");
+ log_debug("MediaParserFfmpeg::parseNextFrame: parsing complete, nothing to do");
return false;
}
+ // position the stream where we left off parsing, as
+ // it could be somewhere else after reading a specific
+ // frame or seeking.
+ _stream->set_position(_lastParsedPosition);
+
assert(_formatCtx);
AVPacket packet;
- //log_debug("av_read_frame call");
+ log_debug("av_read_frame call");
int rc = av_read_frame(_formatCtx, &packet);
- //log_debug("av_read_frame returned %d", rc);
+ log_debug("av_read_frame returned %d", rc);
if ( rc < 0 )
{
log_error(_("MediaParserFfmpeg::parseNextChunk: Problems parsing next frame"));
@@ -337,9 +237,21 @@
_parsingComplete=true;
}
+ // Update _lastParsedPosition
+ boost::uint64_t curPos = _stream->get_position();
+ if ( curPos > _lastParsedPosition )
+ {
+ _lastParsedPosition = curPos;
+ }
+ log_debug("parseNextFrame: parsed %d+%d/%d bytes (byteIOCxt: pos:%d, buf_ptr:%p, buf_end:%p); "
+ " AVFormatContext: data_offset:%d, cur_ptr:%p; "
+ "%d video frames, %d audio frames",
+ curPos, _formatCtx->cur_ptr-_formatCtx->cur_pkt.data, _stream->get_size(),
+ _byteIOCxt.pos, (void*)_byteIOCxt.buf_ptr, (void*)_byteIOCxt.buf_end,
+ _formatCtx->data_offset, (void*)_formatCtx->cur_ptr,
+ _videoFrames.size(), _audioFrames.size());
return ret;
-
}
bool
@@ -361,9 +273,7 @@
MediaParserFfmpeg::MediaParserFfmpeg(std::auto_ptr<tu_file> stream)
:
MediaParser(stream),
- _videoFrames(),
_nextVideoFrame(0),
- _audioFrames(),
_nextAudioFrame(0),
_inputFmt(0),
_formatCtx(0),
@@ -375,7 +285,7 @@
{
av_register_all(); // TODO: needs to be invoked only once ?
- ByteIOCxt.buffer = NULL;
+ _byteIOCxt.buffer = NULL;
_inputFmt = probeStream();
if ( ! _inputFmt )
@@ -388,20 +298,20 @@
// Setup the filereader/seeker mechanism. 7th argument (NULL) is the writer function,
// which isn't needed.
_byteIOBuffer.reset( new unsigned char[byteIOBufferSize] );
- init_put_byte(&ByteIOCxt,
- _byteIOBuffer.get(),
- byteIOBufferSize, // ?
- 0, // ?
+ init_put_byte(&_byteIOCxt,
+ _byteIOBuffer.get(), // buffer
+ byteIOBufferSize, // buffer size
+ 0, // write flags
this, // opaque pointer to pass to the callbacks
MediaParserFfmpeg::readPacketWrapper, // packet reader callback
- NULL, // writer callback
+ NULL, // packet writer callback
MediaParserFfmpeg::seekMediaWrapper // seeker callback
);
- ByteIOCxt.is_streamed = 1;
+ _byteIOCxt.is_streamed = 1;
// Open the stream. The 4th argument is the filename, which we ignore.
- if(av_open_input_stream(&_formatCtx, &ByteIOCxt, "", _inputFmt, NULL) < 0)
+ if(av_open_input_stream(&_formatCtx, &_byteIOCxt, "", _inputFmt, NULL) < 0)
{
throw GnashException("MediaParserFfmpeg couldn't open input stream");
}
@@ -475,33 +385,19 @@
//av_free(_inputFmt); // it seems this one blows up, could be due to av_free(_formatCtx) above
}
- for (VideoFrames::iterator i=_videoFrames.begin(),
- e=_videoFrames.end(); i!=e; ++i)
- {
- delete (*i);
- }
-
- for (AudioFrames::iterator i=_audioFrames.begin(),
- e=_audioFrames.end(); i!=e; ++i)
- {
- delete (*i);
- }
}
int
MediaParserFfmpeg::readPacket(boost::uint8_t* buf, int buf_size)
{
//GNASH_REPORT_FUNCTION;
+ log_debug("readPacket(%d)", buf_size);
assert( _stream.get() );
tu_file& in = *_stream;
size_t ret = in.read_bytes(static_cast<void*>(buf), buf_size);
- // Update _lastParsedPosition
- boost::uint64_t curPos = in.get_position();
- if ( curPos > _lastParsedPosition ) _lastParsedPosition = curPos;
-
return ret;
}
@@ -509,7 +405,7 @@
offset_t
MediaParserFfmpeg::seekMedia(offset_t offset, int whence)
{
- //GNASH_REPORT_FUNCTION;
+ GNASH_REPORT_FUNCTION;
assert(_stream.get());
tu_file& in = *(_stream);
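
The removed nextVideoFrameTimestamp()/nextAudioFrameTimestamp() overrides shared one idiom worth keeping in mind: parse input on demand until the frame at the cursor exists, or until parsing completes at EOF. A simplified sketch of that loop, using a fake pre-tokenized input in place of av_read_frame (`DemandParser` is illustrative only, not a Gnash or FFmpeg class):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Stand-in for a parsed frame: only the timestamp matters here.
struct Frame { std::uint64_t timestamp; };

class DemandParser
{
public:
	// 'input' plays the role of the not-yet-parsed stream: each
	// element becomes one frame's timestamp.
	DemandParser(const std::vector<std::uint64_t>& input)
		: _input(input), _inputPos(0), _parsingComplete(false)
	{}

	// Mirrors the removed nextVideoFrameTimestamp(): parse until the
	// frame at 'cursor' is available, then report its timestamp.
	// Returns false if EOF is reached before that frame exists.
	bool timestampAt(std::size_t cursor, std::uint64_t& ts)
	{
		while (_frames.size() <= cursor && !_parsingComplete)
		{
			if (!parseNextFrame()) break;
		}
		if (_frames.size() <= cursor) return false;
		ts = _frames[cursor].timestamp;
		return true;
	}

private:
	// Stand-in for the av_read_frame-driven parseNextFrame():
	// consumes one input element, or flags parsing complete at EOF.
	bool parseNextFrame()
	{
		if (_inputPos >= _input.size())
		{
			_parsingComplete = true;
			return false;
		}
		Frame f;
		f.timestamp = _input[_inputPos++];
		_frames.push_back(f);
		return true;
	}

	std::vector<std::uint64_t> _input;
	std::size_t _inputPos;
	bool _parsingComplete;
	std::vector<Frame> _frames;
};
```

After this commit the loop itself lives behind the base-class queue accessors, but the "pull parse until the buffer can answer" behaviour is the same.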
Index: libmedia/ffmpeg/MediaParserFfmpeg.h
===================================================================
RCS file: /sources/gnash/gnash/libmedia/ffmpeg/MediaParserFfmpeg.h,v
retrieving revision 1.6
retrieving revision 1.7
diff -u -b -r1.6 -r1.7
--- libmedia/ffmpeg/MediaParserFfmpeg.h 4 Jun 2008 20:14:40 -0000 1.6
+++ libmedia/ffmpeg/MediaParserFfmpeg.h 6 Jun 2008 16:45:09 -0000 1.7
@@ -57,27 +57,6 @@
~MediaParserFfmpeg();
// See dox in MediaParser.h
- virtual boost::uint32_t getBufferLength();
-
- // See dox in MediaParser.h
- virtual bool nextVideoFrameTimestamp(boost::uint64_t& ts);
-
- // See dox in MediaParser.h
- virtual std::auto_ptr<EncodedVideoFrame> nextVideoFrame();
-
- // See dox in MediaParser.h
- virtual bool nextAudioFrameTimestamp(boost::uint64_t& ts);
-
- // See dox in MediaParser.h
- virtual std::auto_ptr<EncodedAudioFrame> nextAudioFrame();
-
- // See dox in MediaParser.h
- virtual VideoInfo* getVideoInfo();
-
- // See dox in MediaParser.h
- virtual AudioInfo* getAudioInfo();
-
- // See dox in MediaParser.h
virtual boost::uint32_t seek(boost::uint32_t);
// See dox in MediaParser.h
@@ -88,67 +67,6 @@
private:
- /// Information about an FFMPEG Video Frame
- class VideoFrameInfo
- {
- public:
-
- VideoFrameInfo()
- :
- isKeyFrame(false),
- dataSize(0),
- dataPosition(0),
- timestamp(0)
- {}
-
- /// Type of this frame
- bool isKeyFrame;
-
- /// Size of the frame in bytes (needed?)
- boost::uint32_t dataSize;
-
- /// Start of frame data in stream
- boost::uint64_t dataPosition;
-
- /// Timestamp in milliseconds
- boost::uint32_t timestamp;
-
- };
-
- /// Information about an FFMPEG Audio Frame
- class AudioFrameInfo
- {
- public:
-
- AudioFrameInfo()
- :
- dataSize(0),
- dataPosition(0),
- timestamp(0)
- {}
-
- /// Size of the frame in bytes (needed?)
- boost::uint32_t dataSize;
-
- /// Start of frame data in stream
- boost::uint64_t dataPosition;
-
- /// Timestamp in milliseconds
- boost::uint32_t timestamp;
-
- };
-
- // NOTE: VideoFrameInfo is a relatively small structure,
- // chances are keeping by value here would reduce
- // memory fragmentation with no big cost
- typedef std::vector<VideoFrameInfo*> VideoFrames;
-
- /// list of videoframes, does not contain the frame data.
- //
- /// Elements owned by this class.
- ///
- VideoFrames _videoFrames;
-
/// Video frame cursor position
//
/// This is the video frame number that will
@@ -156,19 +74,6 @@
///
size_t _nextVideoFrame;
- /// Info about the video stream (if any)
- std::auto_ptr<VideoInfo> _videoInfo;
-
- // NOTE: AudioFrameInfo is a relatively small structure,
- // chances are keeping by value here would reduce
- // memory fragmentation with no big cost
- typedef std::vector<AudioFrameInfo*> AudioFrames;
-
- /// list of audioframes, does not contain the frame data.
- //
- /// Elements owned by this class.
- AudioFrames _audioFrames;
-
/// Audio frame cursor position
//
/// This is the audio frame number that will
@@ -176,33 +81,12 @@
///
size_t _nextAudioFrame;
- /// Info about the audio stream (if any)
- std::auto_ptr<AudioInfo> _audioInfo;
-
/// Parse next media frame
//
/// @return false on error or eof, true otherwise
///
bool parseNextFrame();
- /// Parse a video frame
- //
- /// Basically create a VideoFrameInfo out of the AVPacket and push
- /// it on the container.
- ///
- /// @return false on error
- ///
- bool parseVideoFrame(AVPacket& packet);
-
- /// Parse an audio frame
- //
- /// Basically create a AudioFrameInfo out of the AVPacket and push
- /// it on the container.
- ///
- /// @return false on error
- ///
- bool parseAudioFrame(AVPacket& packet);
-
/// Input chunk reader, to be called by ffmpeg parser
int readPacket(boost::uint8_t* buf, int buf_size);
@@ -236,14 +120,14 @@
AVStream* _audioStream;
/// ?
- ByteIOContext ByteIOCxt;
+ ByteIOContext _byteIOCxt;
/// Size of the ByteIO context buffer
//
/// This seems to be the size of chunks read
/// by av_read_frame.
///
- static const size_t byteIOBufferSize = 1024; // 500000;
+ static const size_t byteIOBufferSize = 1024;
boost::scoped_array<unsigned char> _byteIOBuffer;
@@ -256,6 +140,12 @@
///
boost::uint16_t SampleFormatToSampleSize(SampleFormat fmt);
+ /// Make an EncodedVideoFrame from an AVPacket and push to buffer
+ //
+ bool parseVideoFrame(AVPacket& packet);
+
+ /// Make an EncodedAudioFrame from an AVPacket and push to buffer
+ bool parseAudioFrame(AVPacket& packet);
};
Index: server/asobj/NetStream.cpp
===================================================================
RCS file: /sources/gnash/gnash/server/asobj/NetStream.cpp,v
retrieving revision 1.96
retrieving revision 1.97
diff -u -b -r1.96 -r1.97
--- server/asobj/NetStream.cpp 29 May 2008 09:22:00 -0000 1.96
+++ server/asobj/NetStream.cpp 6 Jun 2008 16:45:09 -0000 1.97
@@ -529,9 +529,6 @@
NetStream::bufferLength()
{
if (m_parser.get() == NULL) return 0;
-
- // m_parser will lock a mutex
- // FLVParser::getBufferLength returns milliseconds already
return m_parser->getBufferLength();
}
Index: server/asobj/NetStreamFfmpeg.cpp
===================================================================
RCS file: /sources/gnash/gnash/server/asobj/NetStreamFfmpeg.cpp,v
retrieving revision 1.145
retrieving revision 1.146
diff -u -b -r1.145 -r1.146
--- server/asobj/NetStreamFfmpeg.cpp 6 Jun 2008 10:19:59 -0000 1.145
+++ server/asobj/NetStreamFfmpeg.cpp 6 Jun 2008 16:45:09 -0000 1.146
@@ -517,7 +517,6 @@
raw->m_size);
#endif // GNASH_DEBUG_DECODING
- //raw->m_stream_index = m_audio_index; // no idea what this is needed for
raw->m_ptr = raw->m_data; // no idea what this is needed for
raw->m_pts = frame->timestamp;
@@ -713,7 +712,7 @@
#ifdef GNASH_DEBUG_DECODING
// this one we might avoid :) -- a less intrusive logging approach could
// be to take note of how many frames we're pushing over
- log_debug("pushDecodedAudioFrames(%d) pushing frame with timestamp %d", ts, nextTimestamp);
+ log_debug("pushDecodedAudioFrames(%d) pushing %dth frame with timestamp %d", ts, _audioQueue.size()+1, nextTimestamp);
#endif
_audioQueue.push_back(audio);
}
@@ -729,7 +728,11 @@
assert ( m_parser.get() );
// nothing to do if we don't have a video decoder
- if ( ! _videoDecoder.get() ) return;
+ if ( ! _videoDecoder.get() )
+ {
+ log_debug("refreshVideoFrame: no video decoder, nothing to do");
+ return;
+ }
#ifdef GNASH_DEBUG_DECODING
// bufferLength() would lock the mutex (which we already hold),
@@ -947,26 +950,6 @@
}
long
-NetStreamFfmpeg::bufferLength ()
-{
-#ifdef LOAD_MEDIA_IN_A_SEPARATE_THREAD
- boost::mutex::scoped_lock lock(_parserMutex);
-#endif // LOAD_MEDIA_IN_A_SEPARATE_THREAD
-
- if ( ! m_parser.get() )
- {
- log_debug("bytesTotal: no parser, no party");
- return 0;
- }
-
- boost::uint32_t maxTimeInBuffer = m_parser->getBufferLength();
- boost::uint64_t curPos = _playHead.getPosition();
-
- if ( maxTimeInBuffer < curPos ) return 0;
- return maxTimeInBuffer-curPos;
-}
-
-long
NetStreamFfmpeg::bytesTotal ()
{
#ifdef LOAD_MEDIA_IN_A_SEPARATE_THREAD
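
For reference, the dropped NetStreamFfmpeg::bufferLength() computed buffered time as the parser's maximum buffered timestamp minus the play-head position, clamped at zero; per the log message, the base NetStream version is expected to behave equivalently once PlayHead moves into the base class. The arithmetic in isolation (`bufferAhead` is a hypothetical name for this sketch, not a Gnash function):

```cpp
#include <cstdint>

// Buffered time ahead of the play head, in milliseconds.
// maxTimeInBuffer: parser->getBufferLength() (max buffered timestamp)
// playHeadPos:     _playHead.getPosition()
// If the play head has already passed everything buffered, report 0
// rather than underflowing the unsigned subtraction.
std::uint64_t bufferAhead(std::uint64_t maxTimeInBuffer,
                          std::uint64_t playHeadPos)
{
	if (maxTimeInBuffer < playHeadPos) return 0;
	return maxTimeInBuffer - playHeadPos;
}
```

The clamp is the important part: without it, a play head past the buffered region would wrap to a huge unsigned value.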
Index: server/asobj/NetStreamFfmpeg.h
===================================================================
RCS file: /sources/gnash/gnash/server/asobj/NetStreamFfmpeg.h,v
retrieving revision 1.73
retrieving revision 1.74
diff -u -b -r1.73 -r1.74
--- server/asobj/NetStreamFfmpeg.h 3 Jun 2008 16:21:39 -0000 1.73
+++ server/asobj/NetStreamFfmpeg.h 6 Jun 2008 16:45:09 -0000 1.74
@@ -112,7 +112,6 @@
long bytesTotal();
- long bufferLength();
private:
enum PlaybackState {
Index: libmedia/MediaParser.cpp
===================================================================
RCS file: libmedia/MediaParser.cpp
diff -N libmedia/MediaParser.cpp
--- /dev/null 1 Jan 1970 00:00:00 -0000
+++ libmedia/MediaParser.cpp 6 Jun 2008 16:45:08 -0000 1.1
@@ -0,0 +1,141 @@
+// MediaParser.cpp: Media file parser, for Gnash.
+//
+// Copyright (C) 2007, 2008 Free Software Foundation, Inc.
+//
+// This program is free software; you can redistribute it and/or modify
+// it under the terms of the GNU General Public License as published by
+// the Free Software Foundation; either version 3 of the License, or
+// (at your option) any later version.
+//
+// This program is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License
+// along with this program; if not, write to the Free Software
+// Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+//
+
+
+#include "MediaParser.h"
+#include "log.h"
+
+namespace gnash {
+namespace media {
+
+boost::uint64_t
+MediaParser::getBufferLength() const
+{
+ bool hasVideo = _videoInfo.get();
+ bool hasAudio = _audioInfo.get();
+
+ //log_debug("MediaParser::getBufferLength: %d video %d audio frames", _videoFrames.size(), _audioFrames.size());
+
+ if ( hasVideo && hasAudio )
+ {
+ return std::min(audioBufferLength(), videoBufferLength());
+ }
+ else if ( hasVideo )
+ {
+ return videoBufferLength();
+ }
+ else if ( hasAudio )
+ {
+ return audioBufferLength();
+ }
+ else return 0;
+}
+
+boost::uint64_t
+MediaParser::videoBufferLength() const
+{
+ if (_videoFrames.empty()) return 0;
+ //log_debug("videoBufferLength: first video frame has timestamp %d", _videoFrames.front()->timestamp());
+ return _videoFrames.back()->timestamp() - _videoFrames.front()->timestamp();
+}
+
+boost::uint64_t
+MediaParser::audioBufferLength() const
+{
+ if (_audioFrames.empty()) return 0;
+ //log_debug("audioBufferLength: first audio frame has timestamp %d", _audioFrames.front()->timestamp);
+ return _audioFrames.back()->timestamp - _audioFrames.front()->timestamp;
+}
+
+const EncodedVideoFrame*
+MediaParser::peekNextVideoFrame() const
+{
+ if (_videoFrames.empty())
+ {
+ log_debug("MediaParser::peekNextVideoFrame: no more video frames here...");
+ return 0;
+ }
+ return _videoFrames.front();
+}
+
+bool
+MediaParser::nextVideoFrameTimestamp(boost::uint64_t& ts) const
+{
+ const EncodedVideoFrame* ef = peekNextVideoFrame();
+ if ( ! ef ) return false;
+ ts = ef->timestamp();
+ return true;
+}
+
+std::auto_ptr<EncodedVideoFrame>
+MediaParser::nextVideoFrame()
+{
+ std::auto_ptr<EncodedVideoFrame> ret;
+ if (_videoFrames.empty()) return ret;
+ ret.reset(_videoFrames.front());
+ _videoFrames.pop_front();
+ return ret;
+}
+
+std::auto_ptr<EncodedAudioFrame>
+MediaParser::nextAudioFrame()
+{
+ std::auto_ptr<EncodedAudioFrame> ret;
+ if (_audioFrames.empty()) return ret;
+ ret.reset(_audioFrames.front());
+ _audioFrames.pop_front();
+ return ret;
+}
+
+bool
+MediaParser::nextAudioFrameTimestamp(boost::uint64_t& ts) const
+{
+ const EncodedAudioFrame* ef = peekNextAudioFrame();
+ if ( ! ef ) return false;
+ ts = ef->timestamp;
+ return true;
+}
+
+const EncodedAudioFrame*
+MediaParser::peekNextAudioFrame() const
+{
+ if (!_audioInfo.get() || _audioFrames.empty()) return 0;
+ return _audioFrames.front();
+}
+
+MediaParser::~MediaParser()
+{
+ for (VideoFrames::iterator i=_videoFrames.begin(),
+ e=_videoFrames.end(); i!=e; ++i)
+ {
+ delete (*i);
+ }
+
+ for (AudioFrames::iterator i=_audioFrames.begin(),
+ e=_audioFrames.end(); i!=e; ++i)
+ {
+ delete (*i);
+ }
+}
+
+} // end of gnash::media namespace
+} // end of gnash namespace
+
+#undef PADDING_BYTES
+#undef READ_CHUNKS
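
MediaParser::getBufferLength() above picks the smaller of the two buffered spans when both an audio and a video stream are present, so the reported length reflects the stream we have least of (the one that would starve first), matching the "smallest max-timestamp" idea noted in the removed ffmpeg TODO. The policy in isolation (`bufferLength` here is a hypothetical free function, not the member):

```cpp
#include <algorithm>
#include <cstdint>

// Combined buffer length, in milliseconds, from per-stream spans.
// videoSpanMs/audioSpanMs: timestamp diff between last and first
// queued frame of each stream (0 when that queue is empty).
std::uint64_t bufferLength(bool hasVideo, std::uint64_t videoSpanMs,
                           bool hasAudio, std::uint64_t audioSpanMs)
{
	// Both streams present: the stream with less buffered data wins.
	if (hasVideo && hasAudio) return std::min(videoSpanMs, audioSpanMs);
	if (hasVideo) return videoSpanMs;
	if (hasAudio) return audioSpanMs;
	return 0; // neither stream: nothing buffered
}
```

Taking the minimum is the conservative choice for NetStream.bufferLength: reporting the larger span could claim the buffer is full while one of the two streams is about to run dry.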