Learning Live555 (7) ---------- On-Demand Playback of H.264 Video with Live555

The previous post covered how to implement recording with Live555; what I recorded was an H.264-encoded video file. In "An Introduction to Live555" I mentioned that you can put an mp3 file into the live/mediaServer directory, have Live555 stream it, and then play it on demand with VLC. So can our recorded h264 file be streamed by Live555 and played on demand with VLC in the same way? I tried it, and it turns out it cannot.

I then compared how Live555 answers VLC's requests when streaming the mp3 file versus the h264 file, and found the cause: the SDP returned for the h264 file always contains "a=range:npt=0-", while in the SDP returned for the mp3 file the value after npt runs from 0 to a concrete number, i.e. it specifies the duration of the ServerMediaSession (for example "a=range:npt=0-123.456" for a 123.456-second clip). The relevant code is in ServerMediaSession::generateSDPDescription:

// Unless subsessions have differing durations, we also have a "a=range:" line:
float dur = duration();
if (dur == 0.0) {
  rangeLine = strDup("a=range:npt=0-\r\n");
} else if (dur > 0.0) {
  char buf[100];
  sprintf(buf, "a=range:npt=0-%.3f\r\n", dur);
  rangeLine = strDup(buf);
} else { // subsessions have differing durations, so "a=range:" lines go there
  rangeLine = strDup("");
}
In other words, "a=range:npt=0-" appears only when the duration function returns 0, and it means that the duration of this ServerMediaSession is unknown (not that the duration is 0).

float ServerMediaSession::duration() const {
  float minSubsessionDuration = 0.0;
  float maxSubsessionDuration = 0.0;
  for (ServerMediaSubsession* subsession = fSubsessionsHead; subsession != NULL;
       subsession = subsession->fNext) {
    // Hack: If any subsession supports seeking by 'absolute' time, then return a negative value, to indicate that only subsessions
    // will have a "a=range:" attribute:
    char* absStartTime = NULL; char* absEndTime = NULL;
    subsession->getAbsoluteTimeRange(absStartTime, absEndTime);
    if (absStartTime != NULL) return -1.0f;

    float ssduration = subsession->duration();
    if (subsession == fSubsessionsHead) { // this is the first subsession
      minSubsessionDuration = maxSubsessionDuration = ssduration;
    } else if (ssduration < minSubsessionDuration) {
      minSubsessionDuration = ssduration;
    } else if (ssduration > maxSubsessionDuration) {
      maxSubsessionDuration = ssduration;
    }
  }

  if (maxSubsessionDuration != minSubsessionDuration) {
    return -maxSubsessionDuration; // because subsession durations differ
  } else {
    return maxSubsessionDuration; // all subsession durations are the same
  }
}
Looking at ServerMediaSession::duration, we can see that a ServerMediaSession's duration is determined by the durations of its ServerMediaSubsessions. So let's look at ServerMediaSubsession's duration function:

float ServerMediaSubsession::duration() const {
  // default implementation: assume an unbounded session:
  return 0.0;
}
The default implementation returns 0. For an H.264 video file the concrete ServerMediaSubsession class is H264VideoFileServerMediaSubsession, and that class contains no override of duration, so for an H.264 file the duration function returns 0. For an mp3 file, by contrast, here is how MP3AudioFileServerMediaSubsession implements duration:

float MP3AudioFileServerMediaSubsession::duration() const {
  return fFileDuration; // fFileDuration is obtained from the MP3FileSource
}
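Since duration is a virtual function, you could in principle advertise a known length for an h264 file by subclassing. Below is a minimal sketch under my own assumptions: the class FixedDurationH264Subsession and its knownDuration parameter are hypothetical, not part of Live555. Note that this would only fix the advertised "a=range:" line; real seeking also requires the subsession to be able to reposition its source, which a raw H.264 byte stream cannot do, and that is why the Transport Stream route described below is the practical solution.

#include "H264VideoFileServerMediaSubsession.hh"

// Hypothetical subclass, for illustration only: it advertises a fixed duration
// so the SDP carries "a=range:npt=0-<dur>" instead of "a=range:npt=0-".
class FixedDurationH264Subsession: public H264VideoFileServerMediaSubsession {
public:
  static FixedDurationH264Subsession* createNew(UsageEnvironment& env, char const* fileName,
                                                Boolean reuseFirstSource, float knownDuration) {
    return new FixedDurationH264Subsession(env, fileName, reuseFirstSource, knownDuration);
  }

  virtual float duration() const { return fKnownDuration; } // overrides the default "return 0.0"

protected:
  FixedDurationH264Subsession(UsageEnvironment& env, char const* fileName,
                              Boolean reuseFirstSource, float knownDuration)
    : H264VideoFileServerMediaSubsession(env, fileName, reuseFirstSource),
      fKnownDuration(knownDuration) {}

private:
  float fKnownDuration; // clip length in seconds, supplied by the caller
};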
Later I found that Live555 also supports on-demand playback for mkv files, so I looked at how MatroskaFileServerMediaSubsession implements duration:

float MatroskaFileServerMediaSubsession::duration() const { return fOurDemux.fileDuration(); }
In short, for mp3 and mkv files the duration function has a concrete implementation, while h264 files fall back to the default implementation that returns 0. I later found a note about this on the Live555 website.

That note lists the file types for which Live555 supports trick-play operations (pausing, seeking, fast forward, reverse play). Seeking is what on-demand playback requires, and h264 is not among the supported types. But the note goes on to say that for these operations to work on an MPEG Transport Stream file (a .ts file), it must be accompanied by an index file, and that MPEG2TransportStreamIndexer can be used to generate that index file.

I then found the handling of .ts files in testOnDemandRTSPServer.cpp:

// A MPEG-2 Transport Stream:
{
  char const* streamName = "mpeg2TransportStreamTest";
  char const* inputFileName = "test.ts";
  char const* indexFileName = "test.tsx";
  ServerMediaSession* sms
    = ServerMediaSession::createNew(*env, streamName, streamName,
                                    descriptionString);
  sms->addSubsession(MPEG2TransportFileServerMediaSubsession
                     ::createNew(*env, inputFileName, indexFileName, reuseFirstSource));
  rtspServer->addServerMediaSession(sms);

  announceStream(rtspServer, sms, streamName, inputFileName);
}
As you can see, to stream a .ts file Live555 also needs a .tsx file, which is the index file corresponding to that .ts file. I then found MPEG2TransportStreamIndexer.cpp in the live/testProgs directory:

// Excerpted from live/testProgs/MPEG2TransportStreamIndexer.cpp
#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

#define TRANSPORT_PACKET_SIZE 188

UsageEnvironment* env;
char const* programName;
void afterPlaying(void* clientData);
void usage();

int main(int argc, char const** argv) {
  // Begin by setting up our usage environment:
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  env = BasicUsageEnvironment::createNew(*scheduler);

  // Parse the command line:
  programName = argv[0];
  //if (argc != 2) usage();

  char const* inputFileName = "test.ts"; // hardcoded here instead of taking argv[1]
  // Check whether the input file name ends with ".ts":
  int len = strlen(inputFileName);
  if (len < 4 || strcmp(&inputFileName[len-3], ".ts") != 0) {
    *env << "ERROR: input file name \"" << inputFileName
         << "\" does not end with \".ts\"\n";
    usage();
  }

  // Open the input file (as a 'byte stream file source'):
  FramedSource* input
    = ByteStreamFileSource::createNew(*env, inputFileName, TRANSPORT_PACKET_SIZE);
  if (input == NULL) {
    *env << "Failed to open input file \"" << inputFileName << "\" (does it exist?)\n";
    exit(1);
  }

  // Create a filter that indexes the input Transport Stream data:
  FramedSource* indexer
    = MPEG2IFrameIndexFromTransportStream::createNew(*env, input);

  // The output file name is the same as the input file name, except with suffix ".tsx":
  char* outputFileName = new char[len+2]; // allow for trailing x\0
  sprintf(outputFileName, "%sx", inputFileName);

  // Open the output file (for writing), as a 'file sink':
  MediaSink* output = FileSink::createNew(*env, outputFileName);
  if (output == NULL) {
    *env << "Failed to open output file \"" << outputFileName << "\"\n";
    exit(1);
  }

  // Start playing, to generate the output index file:
  *env << "Writing index file \"" << outputFileName << "\"...";
  output->startPlaying(*indexer, afterPlaying, NULL);

  env->taskScheduler().doEventLoop(); // does not return

  return 0; // only to prevent compiler warning
}

void afterPlaying(void* /*clientData*/) {
  *env << "...done\n";
  exit(0);
}

void usage() { // simplified version of the usage() in the original file
  *env << "usage: " << programName << " <transport-stream-file-name>\n";
  exit(1);
}
This program demonstrates how to generate the .tsx file corresponding to a .ts file; once both the .ts file and its .tsx file are available, on-demand playback becomes possible. So how do we turn an h264 file into a .ts file? To my delight I found testH264VideoToTransportStream.cpp, also in the live/testProgs directory. Let's take a look:

// Excerpted from live/testProgs/testH264VideoToTransportStream.cpp
#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

UsageEnvironment* env;
char const* inputFileName = "in.264";
char const* outputFileName = "out.ts";
void afterPlaying(void* clientData);

int main(int argc, char** argv) {
  // Begin by setting up our usage environment:
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  env = BasicUsageEnvironment::createNew(*scheduler);

  // Open the input file as a 'byte-stream file source':
  FramedSource* inputSource = ByteStreamFileSource::createNew(*env, inputFileName);
  if (inputSource == NULL) {
    *env << "Unable to open file \"" << inputFileName
         << "\" as a byte-stream file source\n";
    exit(1);
  }

  // Create a 'framer' filter for this file source, to generate presentation times for each NAL unit:
  H264VideoStreamFramer* framer
    = H264VideoStreamFramer::createNew(*env, inputSource, True/*includeStartCodeInOutput*/);

  // Then create a filter that packs the H.264 video data into a Transport Stream:
  MPEG2TransportStreamFromESSource* tsFrames = MPEG2TransportStreamFromESSource::createNew(*env);
  tsFrames->addNewVideoSource(framer, 5/*mpegVersion: H.264*/);

  // Open the output file as a 'file sink':
  MediaSink* outputSink = FileSink::createNew(*env, outputFileName);
  if (outputSink == NULL) {
    *env << "Unable to open file \"" << outputFileName << "\" as a file sink\n";
    exit(1);
  }

  // Finally, start playing:
  *env << "Beginning to read...\n";
  outputSink->startPlaying(*tsFrames, afterPlaying, NULL);

  env->taskScheduler().doEventLoop(); // does not return

  return 0; // only to prevent compiler warning
}

void afterPlaying(void* /*clientData*/) {
  *env << "Done reading.\n";
  *env << "Wrote output file: \"" << outputFileName << "\"\n";
  exit(0);
}
This program demonstrates how to convert an h264 video file into a .ts file. Combining the two example programs above therefore gives us on-demand playback of an H264 file: first run testH264VideoToTransportStream to convert the h264 file into a .ts file (the code above reads in.264 and writes out.ts), then run MPEG2TransportStreamIndexer on the resulting .ts file to produce the matching .tsx index file, and finally stream the .ts/.tsx pair.
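For the serving step you don't even need to write code of your own: live555MediaServer's DynamicRTSPServer already recognizes the .ts extension and automatically looks for an index file named after the stream with a trailing "x". Its behavior is roughly equivalent to the helper sketched below (the helper and its name are mine, written for illustration; the server's actual logic lives in mediaServer/DynamicRTSPServer.cpp):

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

// Hypothetical helper mirroring what DynamicRTSPServer does for ".ts" files:
// derive the index file name by appending "x", then add a Transport Stream subsession.
static void addTransportStreamSubsession(UsageEnvironment& env, ServerMediaSession* sms,
                                         char const* fileName, Boolean reuseSource) {
  unsigned indexFileNameLen = strlen(fileName) + 2; // allow for trailing "x\0"
  char* indexFileName = new char[indexFileNameLen];
  sprintf(indexFileName, "%sx", fileName); // e.g. "test.ts" -> "test.tsx"
  sms->addSubsession(MPEG2TransportFileServerMediaSubsession
                     ::createNew(env, fileName, indexFileName, reuseSource));
  delete[] indexFileName;
}

So it is enough to drop test.ts and test.tsx into the directory that live555MediaServer serves from and open rtsp://<server address>/test.ts in VLC to get seekable playback.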

Finally, a few words on my understanding of multithreaded programming with Live555. Live555 uses an event-driven, single-threaded model: each TaskScheduler corresponds to one Live555 thread, so our program can create multiple TaskSchedulers (each with its own UsageEnvironment) to get multiple threads. The other threads of our program can interact with a Live555 thread either through a global flag variable (the watchVariable passed to doEventLoop) or by calling the triggerEvent function; the Live555 FAQ describes triggerEvent as the only Live555 function that may safely be called from a different thread, and warns that threads must not share Live555 objects.
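Here is a minimal sketch of both mechanisms, under my own naming (handleStop, live555ThreadFunc, and requestStop are hypothetical helper names; real code must also ensure the trigger has been created before another thread fires it):

#include "BasicUsageEnvironment.hh"

UsageEnvironment* env = NULL;
EventTriggerId stopTriggerId = 0;
char volatile stopFlag = 0; // watch variable polled by the event loop

// Runs inside the Live555 thread when the trigger fires:
void handleStop(void* /*clientData*/) {
  stopFlag = 1; // a non-zero watch variable makes doEventLoop() return
}

// The Live555 thread (e.g. the entry point passed to pthread_create):
// it alone owns the TaskScheduler and UsageEnvironment.
void* live555ThreadFunc(void* /*arg*/) {
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  env = BasicUsageEnvironment::createNew(*scheduler);
  stopTriggerId = scheduler->createEventTrigger(handleStop);

  env->taskScheduler().doEventLoop(&stopFlag); // returns once stopFlag != 0
  return NULL;
}

// Any other thread may safely call only triggerEvent() on the scheduler:
void requestStop() {
  env->taskScheduler().triggerEvent(stopTriggerId, NULL);
}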

Reposted from blog.csdn.net/qq_43716137/article/details/108638676