Summary of IoT live streaming technology learning

Because my work involves live streaming media, this article records what I have learned about streaming media playback. The content draws on the generous help of colleagues and material from the Internet.

Table of Contents

Part 1: Streaming media

1. Technology roadmap diagram

2. Streaming media internal structure diagram

3. Streaming media analysis sequence diagram

4. Live streaming protocols

Part 2: The streaming media server (SRS)

1. Installation

2. Start/stop

3. Push stream test

4. Pull stream test

References


Part 1: Streaming media

IoT streaming technology roadmap

1. Technology roadmap diagram

The technical roadmap is divided into C/C++ and Java tracks; the figure below introduces the Java track for reference.

2. Streaming media internal structure diagram

 

3. Streaming media analysis sequence diagram


4. Live streaming protocols

Part 2: The streaming media server (SRS)

SRS (Simple RTMP Server) is an excellent open-source streaming media server developed in China, usable for live broadcasting, recording, video customer service, and similar scenarios. It is positioned as an operation-grade Internet live streaming server cluster.

1. Installation

The official website provides three installation methods: compiling from source, using Docker (for learning), and downloading and installing a prebuilt package.

For many people the main goal is to learn how to use SRS rather than to work with the C++ code itself, so this article only covers installing and deploying from the prebuilt package.

First, download the latest stable release (currently 3.0.97) from the official website: http://ossrs.net/srs.release/releases/files/SRS-CentOS7-x86_64-3.0.97.zip

Then upload it to the CentOS server, unzip it into a directory of your choice, and run the following command to install:

1.1 Installation from the release package

sudo ./INSTALL
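A minimal sketch of the unzip-and-install sequence described above, assuming the release zip linked earlier has already been uploaded to the server (the extracted directory name may differ):

unzip SRS-CentOS7-x86_64-3.0.97.zip    # extract the release package
cd SRS-CentOS7-x86_64-3.0.97           # adjust if the archive extracts to a different directory
sudo ./INSTALL                         # installs to /usr/local/srs and registers /etc/init.d/srs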

 


After the installation succeeds, the installer prints a success message.

If you see the error "No package lsb_release available." (common on CentOS 7), try installing the missing package first with yum install -y redhat-lsb, then run the installer again.

Tip: to uninstall, run the following commands (stop SRS first):

sudo rm -rf /usr/local/srs
sudo rm -rf /etc/init.d/srs

1.2 CentOS 7 installation (build from source via git)

Installation

[root@localhost cuiyaonan]# yum install -y git #Add git to the system

[root@localhost cuiyaonan]# git clone https://git.oschina.net/winlinvip/srs.oschina.git #download srs

[root@localhost cuiyaonan]# cd srs.oschina/

[root@localhost srs.oschina]# git pull

[root@localhost srs.oschina]# git branch -a

[root@localhost srs.oschina]# git checkout 2.0release

[root@localhost srs.oschina]# cd trunk/

[root@localhost trunk]# ./configure

[root@localhost trunk]# make
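After the build finishes, a quick sanity check (a sketch, assuming the build succeeded and you are still in the trunk directory):

[root@localhost trunk]# ls -l objs/srs    # the compiled server binary should be here
[root@localhost trunk]# ./objs/srs -v     # print the version to confirm the build works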

Configuration

# srstest.conf

listen              19351;
srs_log_tank        file;                # write logs to a file; used together with srs_log_level
srs_log_file        ./objs/srs-edge.log; # location of the log file
srs_log_level       trace;               # log level; the default is trace
daemon              on;                  # run as a daemon; to run in the console, set daemon off; and srs_log_tank console;
max_connections     1000;                # maximum number of connections
pid                 objs/cuiyaonan-edge.pid;

# this part of the configuration is related to http-flv
http_server {
    enabled         on;
    listen          8081;
    dir             ./objs/nginx/html;
}

vhost __defaultVhost__ {

    # http-flv configuration
    http_remux {
        enabled     on;
        mount       [vhost]/[app]/[stream].flv;
        hstrs       on;
    }

    # hls configuration
    hls {
        enabled         on;
        hls_fragment    10;
        hls_window      60;
        hls_path        ./objs/nginx/html;
        hls_m3u8_file   [app]/[stream].m3u8;
        hls_ts_file     [app]/[stream]-[seq].ts;
    }
}
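With a source build, the server can also be started directly against this file instead of using the init script. A sketch, assuming the config is saved as conf/srstest.conf under trunk:

[root@localhost trunk]# ./objs/srs -c conf/srstest.conf
[root@localhost trunk]# cat objs/cuiyaonan-edge.pid    # the pid file configured above is written once srs is running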

 

2. Start/stop

2.1 Start

sudo /etc/init.d/srs start

2.2 Stop

sudo /etc/init.d/srs stop

Besides start/stop, other options are available, such as reload and status:

[~]$ /etc/init.d/srs
Usage: /etc/init.d/srs {start|stop|status|restart|reload}

 

3. Push stream test

Push streaming: publishing video to the streaming media server (a local file or a camera can serve as the video source).

 

3.1 Push streaming with FFmpeg

FFmpeg is an open-source toolset for recording, converting, and streaming digital audio and video; here it is used to push a local video file to SRS. After downloading it from the FFmpeg official website, simply unzip it on your machine and run it.

Tip: the SRS source code ships with a sample flv at ./trunk/doc/source.200kbps.768x320.flv

Push streaming command:

./ffmpeg -re -i source.200kbps.768x320.flv -vcodec copy -acodec copy -f flv -y rtmp://srs_server_ip:1935/live/livestream

Note: the full path of the flv file and the SRS server IP in the command above should be replaced with actual values. By default, SRS listens for RTMP on port 1935; if that port is occupied or the port in srs.conf has been changed, adjust accordingly. If a firewall is enabled, make sure port 1935 is reachable.
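If firewalld is running on CentOS 7, the RTMP port can be opened with something like the following sketch (adjust the port number if you changed it in the config):

firewall-cmd --zone=public --add-port=1935/tcp --permanent
firewall-cmd --reload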

 

./ffmpeg -re -i /Users/jimmy/code/srs/trunk/doc/source.200kbps.768x320.flv -vcodec copy -acodec copy -f flv -y rtmp://*.*.*.*:1935/live/livestream
ffmpeg version 4.1.3-tessus  https://evermeet.cx/ffmpeg/  Copyright (c) 2000-2019 the FFmpeg developers
  built with Apple LLVM version 10.0.1 (clang-1001.0.46.3)
  configuration: --cc=/usr/bin/clang --prefix=/opt/ffmpeg --extra-version=tessus --enable-avisynth --enable-fontconfig --enable-gpl --enable-libaom --enable-libass --enable-libbluray --enable-libfreetype --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libmysofa --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopus --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-version3 --pkg-config-flags=--static --disable-ffplay
  libavutil      56. 22.100 / 56. 22.100
  libavcodec     58. 35.100 / 58. 35.100
  libavformat    58. 20.100 / 58. 20.100
  libavdevice    58.  5.100 / 58.  5.100
  libavfilter     7. 40.101 /  7. 40.101
  libswscale      5.  3.100 /  5.  3.100
  libswresample   3.  3.100 /  3.  3.100
  libpostproc    55.  3.100 / 55.  3.100
Input #0, flv, from '/Users/jimmy/code/srs/trunk/doc/source.200kbps.768x320.flv':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf54.63.104
  Duration: 00:03:30.73, start: 0.034000, bitrate: 251 kb/s
    Stream #0:0: Video: h264 (High), yuv420p(progressive), 768x320 [SAR 1:1 DAR 12:5], 212 kb/s, 25 fps, 25 tbr, 1k tbn, 50 tbc
    Stream #0:1: Audio: aac (LC), 44100 Hz, stereo, fltp, 30 kb/s
Output #0, flv, to 'rtmp://10.2.72.62:1935/live/livestream':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.20.100
    Stream #0:0: Video: h264 (High) ([7][0][0][0] / 0x0007), yuv420p(progressive), 768x320 [SAR 1:1 DAR 12:5], q=2-31, 212 kb/s, 25 fps, 25 tbr, 1k tbn, 1k tbc
    Stream #0:1: Audio: aac (LC) ([10][0][0][0] / 0x000A), 44100 Hz, stereo, fltp, 30 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
frame=  508 fps= 25 q=-1.0 size=     693kB time=00:00:20.24 bitrate= 280.4kbits/s speed=   1x

 

If all goes well, you will see output similar to the above, indicating that the video stream is being pushed to SRS.

 

3.2 Using OBS to capture the camera and push the stream

FFmpeg's command line is not very user-friendly, so the mainstream open-source streaming tool OBS is recommended; the latest version can be downloaded from the official website https://obsproject.com/. Many live streamers use it, and it supports pushing both local video files and camera input.

 

3.2.1 File push

First add one: "media source"

Then give it a name (any name will do).

Select a local video file.

Next is the key part: open the settings.


Under Stream -> Server, enter the SRS address rtmp://srs_server_ip:1935/live (note: without livestream), and enter livestream as the stream key.


After saving the settings, click "Start Streaming". If it goes well, the status bar at the bottom will show real-time statistics.


 

3.2.2 Camera push streaming

It is almost the same as above; the only difference is adding a "Video Capture Device" source.

Then select a camera detected by the machine (for example, the built-in FaceTime HD Camera on a MacBook).

The remaining steps are the same as before.

 

4. Pull stream test

Pull streaming: reading the video stream from the streaming media server (put simply: playing it).

The open-source VLC player is recommended (any other player that supports network media sources will also work): File -> Open Network, then enter the address rtmp://srs_server_ip:1935/live/livestream


If it goes well, it can be played normally.
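Besides VLC, the stream can also be checked from the command line with ffplay (if your FFmpeg build includes it; the build shown earlier was configured with --disable-ffplay). Based on the http_remux and hls settings in the srstest.conf sketch above, HTTP-FLV and HLS URLs should also be playable; a sketch, with ports and paths depending on your actual config:

ffplay rtmp://srs_server_ip:1935/live/livestream        # rtmp pull
ffplay http://srs_server_ip:8081/live/livestream.flv    # http-flv, served by http_server on port 8081
ffplay http://srs_server_ip:8081/live/livestream.m3u8   # hls playlist written into the http_server dir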


I have paused this series for work reasons; I will continue writing when I have the opportunity or interest. Thank you for your support.

References

 

https://www.cnblogs.com/yjmyzz/p/srs_study_1_install_push_and_pull_stream.html
http://ossrs.net/srs.release/releases/
