The notes below are only a personal summary, recorded for my own reference. For a systematic analysis of live555, read the post below, or read the source code itself:
http://blog.csdn.net/niu_gao/article/details/6906055

1. Concepts

live555 uses an architecture similar to GStreamer and DirectShow, organized around the concepts of Source, Filter and Sink. For example, in the test program testOnDemandRTSPServer, the pipeline for streaming H.264 (built automatically by H264VideoFileServerMediaSubsession) is:
[Source] ByteStreamFileSource -> H264or5VideoStreamParser (MPEGVideoStreamParser) -> H264VideoStreamFramer (H264or5VideoStreamFramer (MPEGVideoStreamFramer))
The class that deals with the Sink directly is H264VideoStreamFramer (handed over via H264VideoFileServerMediaSubsession::createNewStreamSource); the Parser and the FileSource are constructed automatically (on demand) by H264VideoStreamFramer.
[Sink] H264or5Fragmenter -> H264VideoRTPSink (H264or5VideoRTPSink) -> VideoRTPSink -> MultiFramedRTPSink
H264or5Fragmenter is what talks to the H264or5VideoStreamFramer above: after fetching frame data from the Source, it fragments the data into RTP packets for output.
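For orientation, here is a condensed sketch of how testOnDemandRTSPServer sets this pipeline up (the port number and file name are illustrative, and error handling is omitted):

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

int main() {
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
  ServerMediaSession* sms = ServerMediaSession::createNew(*env, "h264Stream");
  // This single subsession encapsulates the whole Source chain above;
  // the chain itself is only built when a client actually connects:
  sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(*env, "test.264", False));
  rtspServer->addServerMediaSession(sms);

  env->taskScheduler().doEventLoop(); // the event loop never returns
  return 0;
}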
2. Data flow

2.1 Data input
1) First, the Sink side allocates a buffer to hold the data it will fetch from the Source. The pointer to this buffer is the familiar fTo, and the Sink hands it down through a chain of classes:
H264or5Fragmenter -> MPEGVideoStreamFramer -> MPEGVideoStreamParser
so that it finally travels from the Sink into the Parser. The relevant code fragments:
H264or5Fragmenter::H264or5Fragmenter(...)
{
  fInputBuffer = new unsigned char[fInputBufferSize];
}
void H264or5Fragmenter::doGetNextFrame()
{
fInputSource->getNextFrame(&fInputBuffer[1], fInputBufferSize - 1,
afterGettingFrame, this,
FramedSource::handleClosure, this);
}
void MPEGVideoStreamFramer::doGetNextFrame()
{
fParser->registerReadInterest(fTo, fMaxSize);
continueReadProcessing();
}
void MPEGVideoStreamParser::registerReadInterest(unsigned char* to,
unsigned maxSize) {
fStartOfFrame = fTo = fSavedTo = to;
fLimit = to + maxSize;
fNumTruncatedBytes = fSavedNumTruncatedBytes = 0;
}
2) You may have noticed that fTo is never passed on to the ultimate Source (ByteStreamFileSource). That is because ByteStreamFileSource is accessed by the Parser, and the Parser maintains its own buffers for the raw bytes it reads from ByteStreamFileSource; only the NAL units it parses out are written into fTo (the buffer that came from the Sink). This is why fTo stops at the Parser and never reaches ByteStreamFileSource.
The relevant code:
StreamParser::StreamParser(...)
{
  fBank[0] = new unsigned char[BANK_SIZE];
  fBank[1] = new unsigned char[BANK_SIZE];
}
void StreamParser::ensureValidBytes1(unsigned numBytesNeeded)
{
unsigned maxNumBytesToRead = BANK_SIZE - fTotNumValidBytes;
fInputSource->getNextFrame(&curBank()[fTotNumValidBytes],
maxNumBytesToRead,
afterGettingBytes, this,
onInputClosure, this);
}
unsigned H264or5VideoStreamParser::parse()
{
  saveXBytes(Y); // schematic: the real parse() copies the NAL bytes into fTo via save4Bytes()/saveByte()
}
class MPEGVideoStreamParser: public StreamParser
{
// Record "byte" in the current output frame:
void saveByte(u_int8_t byte) {
if (fTo >= fLimit) { // there's no space left
++fNumTruncatedBytes;
return;
}
*fTo++ = byte;
}
};
2.2 Data output
1) The input data described above ends up in H264or5Fragmenter. One clarification:
H264or5Fragmenter is itself a FramedSource (defined in H264or5VideoRTPSink.cpp) and sits between H264VideoRTPSink and H264VideoStreamFramer. Its doGetNextFrame implementation takes the data obtained from the Source, fragments it as the RTP payload format requires, and stores the result in the Sink's fOutBuf.
The code:
MultiFramedRTPSink::MultiFramedRTPSink(UsageEnvironment& env,
Groupsock* rtpGS,
unsigned char rtpPayloadType,
unsigned rtpTimestampFrequency,
char const* rtpPayloadFormatName,
unsigned numChannels)
: RTPSink(env, rtpGS, rtpPayloadType, rtpTimestampFrequency,
rtpPayloadFormatName, numChannels),
fOutBuf(NULL), fCurFragmentationOffset(0), fPreviousFrameEndedFragmentation(False),
fOnSendErrorFunc(NULL), fOnSendErrorData(NULL) {
setPacketSizes((RTP_PAYLOAD_PREFERRED_SIZE), (RTP_PAYLOAD_MAX_SIZE)); //sihid
}
void MultiFramedRTPSink::setPacketSizes(unsigned preferredPacketSize,
unsigned maxPacketSize) {
if (preferredPacketSize > maxPacketSize || preferredPacketSize == 0) return;
// sanity check
delete fOutBuf;
fOutBuf = new OutPacketBuffer(preferredPacketSize, maxPacketSize);
fOurMaxPacketSize = maxPacketSize; // save value, in case subclasses need it
}
Boolean MultiFramedRTPSink::continuePlaying() {
// Send the first packet.
// (This will also schedule any future sends.)
buildAndSendPacket(True);
return True;
}
void MultiFramedRTPSink::buildAndSendPacket(Boolean isFirstPacket) {
..
packFrame();
}
void MultiFramedRTPSink::packFrame() {
// Get the next frame.
..
// Normal case: we need to read a new frame from the source
if (fSource == NULL) return;
fSource->getNextFrame(fOutBuf->curPtr(), fOutBuf->totalBytesAvailable(),
afterGettingFrame, this, ourHandleClosure, this);
}
void H264or5Fragmenter::doGetNextFrame() {
if (fNumValidDataBytes == 1) {
// We have no NAL unit data currently in the buffer. Read a new one:
    fInputSource->getNextFrame(&fInputBuffer[1], fInputBufferSize - 1, // the Sink pulls data by calling the Source's getNextFrame
afterGettingFrame, this,
FramedSource::handleClosure, this);
  } else {
    ..
    memmove(fTo, &fInputBuffer[1], fNumValidDataBytes - 1); // case 1: the whole NAL unit fits in one RTP packet
    ..
    memmove(fTo, fInputBuffer, fMaxSize); // case 2: NAL unit too large; send its first fragment (the FU indicator occupies fInputBuffer[0])
    ..
    memmove(fTo, &fInputBuffer[fCurDataOffset-numExtraHeaderBytes], numBytesToSend); // case 3: send a subsequent (or final) fragment
    ..
  }
}
As you can see, data input is actually initiated by the Sink (MultiFramedRTPSink): when the Sink needs data, it calls the Source's getNextFrame (implemented by each Source's doGetNextFrame), and the request travels through the chain of classes (Source -> Filter -> Sink) until the Sink obtains the data it wants. The sketch after the pipeline below makes this pull model concrete.
2) At this point we can finally write out the complete pipeline:
ByteStreamFileSource -> H264or5VideoStreamParser (MPEGVideoStreamParser) -> H264VideoStreamFramer (H264or5VideoStreamFramer) -> H264or5Fragmenter -> H264VideoRTPSink (H264or5VideoRTPSink) -> VideoRTPSink -> MultiFramedRTPSink
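To make the pull model concrete, here is a minimal pass-through filter written against the FramedFilter pattern (the class itself is shown in section 3.3). This is not live555 code, just a sketch: the downstream caller hands us fTo/fMaxSize via getNextFrame(), our doGetNextFrame() forwards the request upstream, and the completion callback reports the frame back down via FramedSource::afterGetting():

#include "FramedFilter.hh"

class PassThroughFilter: public FramedFilter {
public:
  static PassThroughFilter* createNew(UsageEnvironment& env, FramedSource* inputSource) {
    return new PassThroughFilter(env, inputSource);
  }
protected:
  PassThroughFilter(UsageEnvironment& env, FramedSource* inputSource)
    : FramedFilter(env, inputSource) {}
private:
  virtual void doGetNextFrame() {
    // Pull one frame from upstream, straight into the buffer our caller gave us:
    fInputSource->getNextFrame(fTo, fMaxSize,
                               afterGettingFrame, this,
                               FramedSource::handleClosure, this);
  }
  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned numTruncatedBytes,
                                struct timeval presentationTime,
                                unsigned durationInMicroseconds) {
    PassThroughFilter* filter = (PassThroughFilter*)clientData;
    // Record the frame parameters, then tell our own consumer that its fTo is filled:
    filter->fFrameSize = frameSize;
    filter->fNumTruncatedBytes = numTruncatedBytes;
    filter->fPresentationTime = presentationTime;
    filter->fDurationInMicroseconds = durationInMicroseconds;
    FramedSource::afterGetting(filter);
  }
};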
3. Design notes

3.1 Buffers
The pipeline above involves several buffers. For applications with tight real-time requirements, it is worth working out how many buffers there are and how their sizes are controlled (a tuning sketch follows this list):
1) StreamParser allocates buffers of BANK_SIZE (150000) bytes, because its upstream is an unframed byte stream while its downstream expects a complete frame (a NAL unit), so the Parser needs working room;
2) H264or5Fragmenter allocates a buffer that receives the parsed data (a NAL unit) coming from the StreamParser side, from which it builds the RTP packets the downstream RTPSink needs;
3) MultiFramedRTPSink allocates a buffer (fOutBuf) that holds the fragmented data (RTP packets) produced by the Fragmenter, ready for the RTSP server to send.
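On the sink side, the usual tuning knob is the static OutPacketBuffer::maxSize, which, as the H264or5VideoRTPSink::continuePlaying code shown later makes clear, also determines the fragmenter's fInputBufferSize. It must be set before the sink is created; a minimal snippet (the value is illustrative):

// Set before creating any RTPSink, so that the fragmenter's input buffer
// can hold your largest NAL unit (e.g. a big IDR frame):
OutPacketBuffer::maxSize = 600000; // bytes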
3.2 fTo
fTo is, as the name suggests, the pointer to the "to" buffer, i.e. the buffer supplied by the downstream consumer. So when does fTo get assigned? Here:
void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
afterGettingFunc* afterGettingFunc,
void* afterGettingClientData,
onCloseFunc* onCloseFunc,
void* onCloseClientData) {
// Make sure we're not already being read:
if (fIsCurrentlyAwaitingData) {
envir() << "FramedSource[" << this << "]::getNextFrame(): attempting to read more than once at the same time!\n";
envir().internalError();
}
  fTo = to; // store the downstream caller's buffer in this Source's fTo (sihid)
  fMaxSize = maxSize; // record how many bytes the caller can accept (sihid)
fNumTruncatedBytes = 0; // by default; could be changed by doGetNextFrame()
fDurationInMicroseconds = 0; // by default; could be changed by doGetNextFrame()
fAfterGettingFunc = afterGettingFunc;
fAfterGettingClientData = afterGettingClientData;
fOnCloseFunc = onCloseFunc;
fOnCloseClientData = onCloseClientData;
fIsCurrentlyAwaitingData = True;
doGetNextFrame();
}
In addition, MPEGVideoStreamFramer passes fTo on to MPEGVideoStreamParser through a separate path:
void MPEGVideoStreamFramer::doGetNextFrame()
{
fParser->registerReadInterest(fTo, fMaxSize);
continueReadProcessing();
}
void MPEGVideoStreamParser::registerReadInterest(unsigned char* to,unsigned maxSize) {
fStartOfFrame = fTo = fSavedTo = to;
fLimit = to + maxSize;
fNumTruncatedBytes = fSavedNumTruncatedBytes = 0;
}
This is because MPEGVideoStreamParser (a StreamParser) is not a FramedSource, so a separate API is needed to hand fTo to it.
In short: when the Sink needs data, it calls the Source's getNextFrame, passing its own buffer in via the to argument; the Source saves that pointer in fTo.
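The completion path is the mirror image of getNextFrame. When a Source has filled fTo, it calls the static FramedSource::afterGetting, which clears the awaiting-data flag and invokes the afterGettingFunc that the consumer registered (slightly abridged from FramedSource.cpp):

void FramedSource::afterGetting(FramedSource* source) {
  source->fIsCurrentlyAwaitingData = False;
  // Clearing the flag first allows the callback below to immediately
  // schedule the next read:
  if (source->fAfterGettingFunc != NULL) {
    (*(source->fAfterGettingFunc))(source->fAfterGettingClientData,
                                   source->fFrameSize, source->fNumTruncatedBytes,
                                   source->fPresentationTime,
                                   source->fDurationInMicroseconds);
  }
}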
3.3 fSource
Boolean MediaSink::startPlaying(MediaSource& source,
afterPlayingFunc* afterFunc,
void* afterClientData) {
// Make sure we're not already being played:
if (fSource != NULL) {
envir().setResultMsg("This sink is already being played");
return False;
}
// Make sure our source is compatible:
if (!sourceIsCompatibleWithUs(source)) {
envir().setResultMsg("MediaSink::startPlaying(): source is not compatible!");
return False;
}
fSource = (FramedSource*)&source;
fAfterFunc = afterFunc;
fAfterClientData = afterClientData;
return continuePlaying();
}
The Sink's fSource is set in startPlaying to the H264VideoStreamFramer returned by createNewStreamSource.
It is then swapped out in continuePlaying (as implemented by H264or5VideoRTPSink) for the H264or5Fragmenter, while reassignInputSource records the H264VideoStreamFramer in the fragmenter's fInputSource member; as a result, the H264or5Fragmenter stands between the Sink and the H264VideoStreamFramer.
Boolean H264or5VideoRTPSink::continuePlaying() {
// First, check whether we have a 'fragmenter' class set up yet.
// If not, create it now:
if (fOurFragmenter == NULL) {
fOurFragmenter = new H264or5Fragmenter(fHNumber, envir(), fSource, OutPacketBuffer::maxSize, ourMaxPacketSize() - 12/*RTP hdr size*/);
} else {
fOurFragmenter->reassignInputSource(fSource);
}
fSource = fOurFragmenter;
  // Then call the parent class's implementation:
  return MultiFramedRTPSink::continuePlaying();
}
class FramedFilter: public FramedSource {
public:
FramedSource* inputSource() const { return fInputSource; }
void reassignInputSource(FramedSource* newInputSource) { fInputSource = newInputSource; }
// Call before destruction if you want to prevent the destructor from closing the input source
void detachInputSource();
protected:
FramedFilter(UsageEnvironment& env, FramedSource* inputSource);
// abstract base class
virtual ~FramedFilter();
protected:
// Redefined virtual functions (with default 'null' implementations):
virtual char const* MIMEtype() const;
virtual void getAttributes() const;
virtual void doStopGettingFrames();
protected:
  FramedSource* fInputSource; // the upstream Source this filter reads from
};
Why do it this way? Because the data the RTPSink needs has to be produced from the H264VideoStreamFramer's output by packing it into RTP payloads, hence the extra H264or5Fragmenter (much like the H264or5VideoStreamParser on the input side).
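For reference, the fragmentation that H264or5Fragmenter performs for H.264 is FU-A packetization (RFC 6184). The following is a simplified, self-contained sketch of the idea, not the live555 implementation (it ignores the single-packet case and H.265):

#include <cstdint>
#include <cstring>

// Build one FU-A fragment of an H.264 NAL unit ('nal' has no start code).
// 'offset' is how many bytes have been consumed so far (start at 1 to skip
// the NAL header byte, which is re-encoded into the two FU bytes).
// Returns the number of bytes written into 'pkt'.
size_t buildFuAFragment(const uint8_t* nal, size_t nalSize,
                        size_t offset, size_t maxPayload, uint8_t* pkt) {
  uint8_t fuIndicator = (nal[0] & 0xE0) | 28; // keep F+NRI bits, type = 28 (FU-A)
  uint8_t fuHeader = nal[0] & 0x1F;           // original NAL unit type
  if (offset == 1) fuHeader |= 0x80;          // S bit: first fragment
  size_t n = nalSize - offset;
  if (n > maxPayload) n = maxPayload;
  else fuHeader |= 0x40;                      // E bit: last fragment
  pkt[0] = fuIndicator;
  pkt[1] = fuHeader;
  memcpy(pkt + 2, nal + offset, n);
  return n + 2;
}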
3.4 fInputSource
fInputSource is defined in class FramedFilter and inherited by both H264or5VideoStreamFramer and H264or5Fragmenter. However, it points to different objects in the two classes.
1) In H264or5VideoStreamFramer it points to the ByteStreamFileSource, as the following code shows:
FramedSource* H264VideoFileServerMediaSubsession::createNewStreamSource(unsigned /*clientSessionId*/, unsigned& estBitrate) {
estBitrate = 500; // kbps, estimate
// Create the video source:
ByteStreamFileSource* fileSource = ByteStreamFileSource::createNew(envir(), fFileName);
if (fileSource == NULL) return NULL;
fFileSize = fileSource->fileSize();
// Create a framer for the Video Elementary Stream:
  return H264VideoStreamFramer::createNew(envir(), fileSource); // the ByteStreamFileSource is recorded in the framer's fInputSource
}
H264or5VideoStreamFramer
::H264or5VideoStreamFramer(int hNumber, UsageEnvironment& env, FramedSource* inputSource,
Boolean createParser, Boolean includeStartCodeInOutput)
: MPEGVideoStreamFramer(env, inputSource),
fHNumber(hNumber),
fLastSeenVPS(NULL), fLastSeenVPSSize(0),
fLastSeenSPS(NULL), fLastSeenSPSSize(0),
fLastSeenPPS(NULL), fLastSeenPPSSize(0) {
  ..
}

MPEGVideoStreamFramer::MPEGVideoStreamFramer(UsageEnvironment& env,
FramedSource* inputSource)
: FramedFilter(env, inputSource),
fFrameRate(0.0) /* until we learn otherwise */,
fParser(NULL) {
reset();
}
2) In H264or5Fragmenter it points to the H264VideoStreamFramer, as the following code shows:
Boolean H264or5VideoRTPSink::continuePlaying() {
// First, check whether we have a 'fragmenter' class set up yet.
// If not, create it now:
if (fOurFragmenter == NULL) {
    fOurFragmenter = new H264or5Fragmenter(fHNumber, envir(), fSource, OutPacketBuffer::maxSize, // OutPacketBuffer::maxSize determines fInputBufferSize (sihid)
ourMaxPacketSize() - 12/*RTP hdr size*/);
} else {
    fOurFragmenter->reassignInputSource(fSource); // save fSource (the H264VideoStreamFramer) into fOurFragmenter's (the H264or5Fragmenter's) fInputSource
}
fSource = fOurFragmenter;
// Then call the parent class's implementation:
return MultiFramedRTPSink::continuePlaying();
}
posted on 2017-01-20 12:15 by lfc