Use live555 to live stream camera data from v4l2

    Use live555 to live-broadcast camera data captured with V4L2. The basic idea of my project is: use V4L2 to capture camera frames, encode them into the H.264 format with the x264 library, and write the encoded data into a named pipe. Finally, live555 reads the data from the pipe and sends it out, implementing a live video broadcast.
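
    To make the pipe handoff concrete, here is a minimal, self-contained sketch (hypothetical, not part of the project) of the named-pipe pattern used below: the writer's fopen() blocks until a reader opens the FIFO, so the capture process and the live555 reader synchronize automatically.

#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    const char *fifo = "/tmp/demo_fifo";          /* demo path, not the project's /tmp/fifo */
    mkfifo(fifo, 0777);                           /* create the named pipe */
    if (0 == fork())
    {                                             /* child: producer (the encoder's role) */
        FILE *w = fopen(fifo, "w");               /* blocks until a reader opens the FIFO */
        fputs("hello from the encoder side\n", w);
        fclose(w);
        _exit(0);
    }
    FILE *r = fopen(fifo, "r");                   /* parent: consumer (live555's role) */
    char line[64];
    if (fgets(line, sizeof(line), r) != NULL)
        printf("reader got: %s", line);
    fclose(r);
    return 0;
}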

    While debugging this project I used a Logitech C270 camera, which outputs frames in the YUYV format. The latest x264 and live555 libraries are used. The project directory is as follows:

├── H264FramedLiveSource.cpp
├── H264VideoStreamer.cpp
├── include
│   ├── encoder
│   │   ├── encoder_define.hh
│   │   ├── H264FramedLiveSource.hh
│   │   └── stdint.h
│   ├── live555
│   │   ├── basicUsageEnvironment
│   │   ├── groupsock
│   │   ├── liveMedia
│   │   └── usageEnvironment
│   └── x264
│       ├── x264_config.h
│       └── x264.h
├── lib
│   ├── livelib
│   │   ├── libBasicUsageEnvironment.a
│   │   ├── libgroupsock.a
│   │   ├── libliveMedia.a
│   │   └── libUsageEnvironment.a
│   └── x264lib
│       ├── libx264.a
│       └── libx264.so.148
└── Makefile
    A new Device class is created in H264FramedLiveSource.cpp to implement V4L2 data acquisition and x264 encoding. It is declared in H264FramedLiveSource.hh as follows:

/*=============================================================================
 * #     FileName: H264FramedLiveSource.hh
 * #         Desc:
 * #               
 * #       Author: licaibiao
 * #      Version:
 * #   LastChange: 2017-02-24
 * =============================================================================*/
#ifndef _H264FRAMEDLIVESOURCE_HH
#define _H264FRAMEDLIVESOURCE_HH
#include <FramedSource.hh>
#include <UsageEnvironment.hh>
#include "encoder_define.hh"

class Device
{
public:
    void init_mmap(void);
    void init_camera(void);
    void init_encoder(void);
    void open_camera(void);  
    void close_camera(void);
    void read_one_frame(void);
    void getnextframe(void);
    void start_capture(void);
    void stop_capture(void);
    void close_encoder();
    int  camera_able_read(void);
    void compress_begin(Encoder *en, int width, int height);
    int  compress_frame(Encoder *en, int type, char *in, int len, char *out);
    void compress_end(Encoder *en);
    void Init();
    void intoloop();
    void Destory();
public:
    int fd;                    /* V4L2 camera file descriptor */
    FILE *save_fd;
    int n_nal;                 /* number of NAL units in the last encode */
    int frame_len;             /* length of the last encoded frame */
    char *h264_buf;            /* buffer for the encoded H.264 data */
    unsigned int n_buffer;     /* number of mmap'ed capture buffers */
    Encoder en;                /* x264 encoder context */
    FILE *h264_fp;
    BUFTYPE *usr_buf;          /* mmap'ed user-space frame buffers */
    FILE *pipe_fd;             /* write end of the named pipe */
};
#endif
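
    The full .cpp implementations are not reproduced here, but as a rough sketch of what compress_begin() and compress_frame() might look like with the x264 API (shown as free functions to keep the sketch self-contained; the Encoder layout and the input-copy step are my assumptions, not the project's verbatim code):

#include <string.h>
#include "x264.h"

/* Assumed layout of the Encoder struct (the real one lives in encoder_define.hh). */
typedef struct Encoder
{
    x264_param_t   param;      /* encoder settings */
    x264_t        *handle;     /* encoder instance */
    x264_picture_t picture;    /* reusable input picture */
    x264_nal_t    *nal;        /* NAL units returned by the encoder */
} Encoder;

void compress_begin(Encoder *en, int width, int height)
{
    x264_param_default_preset(&en->param, "veryfast", "zerolatency");
    en->param.i_width   = width;
    en->param.i_height  = height;
    en->param.i_csp     = X264_CSP_I422;   /* YUYV input maps naturally to 4:2:2 */
    en->param.i_fps_num = 25;
    en->param.i_fps_den = 1;
    en->handle = x264_encoder_open(&en->param);
    x264_picture_alloc(&en->picture, en->param.i_csp, width, height);
}

int compress_frame(Encoder *en, int type, char *in, int len, char *out)
{
    /* ...deinterleave the YUYV input into en->picture.img.plane[] here... */
    x264_picture_t pic_out;
    int i_nal = 0;
    if (x264_encoder_encode(en->handle, &en->nal, &i_nal,
                            &en->picture, &pic_out) < 0)
        return -1;
    /* Concatenate every NAL unit (start codes included) into the output buffer. */
    char *p = out;
    for (int i = 0; i < i_nal; i++)
    {
        memcpy(p, en->nal[i].p_payload, en->nal[i].i_payload);
        p += en->nal[i].i_payload;
    }
    return p - out;                        /* total encoded bytes */
}
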
    H264VideoStreamer.cpp implements the RTSP server and streaming. Its code is as follows:

/**********
This library is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by the
Free Software Foundation; either version 3 of the License, or (at your
option) any later version. (See <http://www.gnu.org/copyleft/lesser.html>.)

This library is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.  See the GNU Lesser General Public License for
more details.

You should have received a copy of the GNU Lesser General Public License
along with this library; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
**********/
// Copyright (c) 1996-2017, Live Networks, Inc.  All rights reserved
// A test program that reads a H.264 Elementary Stream video file
// and streams it using RTP
// main program
//
// NOTE: For this application to work, the H.264 Elementary Stream video file *must* contain SPS and PPS NAL units,
// ideally at or near the start of the file.  These SPS and PPS NAL units are used to specify 'configuration' information
// that is set in the output stream's SDP description (by the RTSP server that is built in to this application).
// Note also that - unlike some other "*Streamer" demo applications - the resulting stream can be received only using a
// RTSP client (such as "openRTSP")

#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>
#include <GroupsockHelper.hh>
#include <H264FramedLiveSource.hh>
#include <sys/types.h>  
#include <sys/stat.h>

UsageEnvironment* env;
char const* inputFileName = "/tmp/fifo";
char * ptr;
H264VideoStreamFramer* videoSource;
RTPSink* videoSink;
class Device Camera;

void play(); // forward

EventTriggerId DeviceSource::eventTriggerId = 0;

int main(int argc, char** argv) {
  // Begin by setting up our usage environment:
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  env = BasicUsageEnvironment::createNew(*scheduler);

  // Create 'groupsocks' for RTP and RTCP:
  struct in_addr destinationAddress;
  destinationAddress.s_addr = chooseRandomIPv4SSMAddress(*env);
  // Note: This is a multicast address.  If you wish instead to stream
  // using unicast, then you should use the "testOnDemandRTSPServer"
  // test program - not this test program - as a model.

  const unsigned short rtpPortNum = 18888;
  const unsigned short rtcpPortNum = rtpPortNum+1;
  const unsigned char ttl = 255;

  const Port rtpPort(rtpPortNum);
  const Port rtcpPort(rtcpPortNum);

  Camera.Init();
  mkfifo(inputFileName, 0777);
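  // Child process below: grab frames from the camera, encode them with x264,
  // and write the H.264 stream into the FIFO that the RTSP server reads.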
  if(0 == fork())
  {
	Camera.pipe_fd = fopen(inputFileName, "w");
	if(NULL == Camera.pipe_fd)
	{
		printf("===============child process open pipe err =======\n ");
	}
	while(1)
	{
		usleep(15000);
		Camera.getnextframe();
	}
	
  }

  Groupsock rtpGroupsock(*env, destinationAddress, rtpPort, ttl);
  rtpGroupsock.multicastSendOnly(); // we're a SSM source
  Groupsock rtcpGroupsock(*env, destinationAddress, rtcpPort, ttl);
  rtcpGroupsock.multicastSendOnly(); // we're a SSM source

  // Create a 'H264 Video RTP' sink from the RTP 'groupsock':
  OutPacketBuffer::maxSize = 600000;
  videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96);

  // Create (and start) a 'RTCP instance' for this RTP sink:
  const unsigned estimatedSessionBandwidth = 10000; // in kbps; for RTCP b/w share
  const unsigned maxCNAMElen = 100;
  unsigned char CNAME[maxCNAMElen+1];
  gethostname((char*)CNAME, maxCNAMElen);
  CNAME[maxCNAMElen] = '\0'; // just in case
  RTCPInstance* rtcp
  = RTCPInstance::createNew(*env, &rtcpGroupsock,
			    estimatedSessionBandwidth, CNAME,
			    videoSink, NULL /* we're a server */,
			    True /* we're a SSM source */);
  // Note: This starts RTCP running automatically

  RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
  if (rtspServer == NULL) {
    *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
    exit(1);
  }
  ServerMediaSession* sms
    = ServerMediaSession::createNew(*env, "testStream", inputFileName,
		   "Session streamed by \"testH264VideoStreamer\"",
					   True /*SSM*/);
  sms->addSubsession(PassiveServerMediaSubsession::createNew(*videoSink, rtcp));
  rtspServer->addServerMediaSession(sms);

  char* url = rtspServer->rtspURL(sms);
  *env << "Play this stream using the URL \"" << url << "\"\n";
  delete[] url;

  // Start the streaming:
  *env << "Beginning streaming...\n";
  play();

  env->taskScheduler().doEventLoop(); // does not return

  return 0; // only to prevent compiler warning
}

void afterPlaying(void* /*clientData*/) {
  *env << "...done reading from file\n";
  videoSink->stopPlaying();
  Medium::close(videoSource);
  Camera.Destory();
  // Note that this also closes the input file that this source read from.

  // Start playing once again:
  play();
}

void play() {
  // Open the input file as a 'byte-stream file source':
  ByteStreamFileSource* fileSource
    = ByteStreamFileSource::createNew(*env, inputFileName);
  if (fileSource == NULL) {
    *env << "Unable to open file \"" << inputFileName
         << "\" as a byte-stream file source\n";
    exit(1);
  }

  FramedSource* videoES = fileSource;

  // Create a framer for the Video Elementary Stream:
  videoSource = H264VideoStreamFramer::createNew(*env, videoES);

  // Finally, start playing:
  *env << "Beginning to read from file...\n";
  videoSink->startPlaying(*videoSource, afterPlaying, videoSink);
}
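
    For completeness, here is a minimal sketch of what Device::getnextframe() (called in the child-process loop above) might do. The helper names follow the class declaration, but usr_buf's fields and the exact compress_frame() arguments are my assumptions:

void Device::getnextframe(void)
{
    if (camera_able_read())                /* select() says a frame is ready */
    {
        read_one_frame();                  /* DQBUF the frame into usr_buf, then re-queue it */
        frame_len = compress_frame(&en, -1, (char *)usr_buf->start,
                                   usr_buf->length, h264_buf);
        if (frame_len > 0 && pipe_fd != NULL)
        {
            fwrite(h264_buf, 1, frame_len, pipe_fd);   /* feed the FIFO */
            fflush(pipe_fd);               /* hand the data to the reader promptly */
        }
    }
}
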
    The download, compilation, and installation of the x264 and live555 libraries were covered in a previous post and will not be repeated here. Compile and run the whole project as follows:

[root@redhat pipelive]# ls
H264FramedLiveSource.cpp  H264FramedLiveSource.o  H264VideoStreamer  H264VideoStreamer.cpp  H264VideoStreamer.o  include  lib  Makefile
[root@redhat pipelive]# ./H264VideoStreamer

camera driver name is : uvcvideo
camera device name is : UVC Camera (046d:0825)
camera bus information: usb-0000:00:1a.0-1.1
n_buffer = 4
x264 [warning]: lookaheadless mb-tree requires intra refresh or infinite keyint
x264 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
x264 [info]: profile High 4:2:2, level 3.0, 4:2:2 8-bit
Play this stream using the URL "rtsp://192.168.0.127:8554/testStream"
Beginning streaming...
Beginning to read from file...
    Open the network stream rtsp://192.168.0.127:8554/testStream in the VLC player to see the camera's live picture:

(Screenshot: VLC playing the camera stream.)

    In actual testing, transmitting the camera data through the pipe introduces some latency. With the picture size set to 320*240 the delay is very small, but at 640*480 it becomes very noticeable. This is also related to the camera I used; the UVC camera is relatively slow at reading out data.
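
    For reference, the picture size is the one negotiated with the V4L2 driver in init_camera(). A hypothetical sketch of that step using the standard VIDIOC_S_FMT ioctl (the helper function itself is illustrative, not the project's code):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int set_capture_format(int fd, int width, int height)
{
    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type                = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = width;                 /* e.g. 320 for low latency */
    fmt.fmt.pix.height      = height;                /* e.g. 240 */
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;     /* the C270's raw format */
    fmt.fmt.pix.field       = V4L2_FIELD_ANY;
    return ioctl(fd, VIDIOC_S_FMT, &fmt);            /* 0 on success, -1 on error */
}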

    The complete project can be downloaded from here: live555 live streaming camera data from v4l2

   =================Updated on 2018-03-16====================

   Because the project contains the x264 dynamic library file, a reader's anti-virus software flagged the downloaded archive as a virus... After it was reported, the download link became invalid...

   If you need the project code for reference, leave an email address in the comment section and I will email it to those who need it.
