Recently I have been using Agora's RTSA, and I wanted to use my own Raspberry Pi for remote video transmission and related features.
RTSA does not support audio and video capture or encoding; it only provides the channel (transport) capability, so capture and encoding on the Raspberry Pi have to be implemented separately.
I looked into capture and encoding for the two camera stacks commonly found on the Raspberry Pi. The first is the open-source camkit project on the legacy stack (the project is old and has many pitfalls); the second is libcamera, the stack now officially recommended by Raspberry Pi and more capable. The two approaches also differ somewhat in how the software processes the data:
The first type: Legacy stack
The legacy stack is the camera interface supported by Raspberry Pi OS versions prior to Bullseye. Many Python-based libraries found online are built on the legacy stack. It can be enabled through raspi-config:
raspi-config
Select [3 Interface Options] --> [I1 Legacy Camera] --> [Yes]
Among the third-party libraries that support the legacy stack, camkit is relatively complete.
download camkit
camkit's features will not be introduced here; there are plenty of introductory articles on CSDN. This article tries to stay concise and walks through the whole usage workflow.
download link:
https://gitee.com/daiyinger/Camkit.git
This project includes video capture and encoding, but because it is relatively old it depends on FFmpeg ≤ 4.4.1; some of the interfaces it uses (for example, old-style APIs such as avcodec_register_all(), which was removed in FFmpeg 5.0) no longer exist in ffmpeg-5.x.
ffmpeg-4.4.1 compile
For details, refer to an FFmpeg compilation guide; the steps are:
wget http://www.ffmpeg.org/releases/ffmpeg-4.4.1.tar.bz2
tar xjvf ffmpeg-4.4.1.tar.bz2
./configure --prefix=/your/install/path \
  --enable-gpl \
  --enable-nonfree \
  --enable-libfdk-aac \
  --enable-libx264 \
  --enable-libx265 \
  --enable-filter=delogo \
  --enable-debug \
  --disable-optimizations \
  --enable-libspeex \
  --enable-shared \
  --enable-pthreads
make -j4
make install
export LD_LIBRARY_PATH=/your/install/path/lib:$LD_LIBRARY_PATH
export PKG_CONFIG_PATH=/your/install/path/lib/pkgconfig:$PKG_CONFIG_PATH
compile camkit
After installing ffmpeg-4.4.1 as above, compile camkit with:
cmake -S . -B build
cmake --build build -j4
If everything is configured correctly, the build completes successfully.
vcpkg
Compiling ffmpeg-4.4.1 as above depends on many third-party libraries such as libx264, so this article also introduces installing ffmpeg-4.4.1 through vcpkg. The latest FFmpeg officially supported by vcpkg under x64-linux is ffmpeg-4.4.1, so it can be installed directly:
vcpkg search ffmpeg
vcpkg install ffmpeg[x264]
github: https://github.com/microsoft/vcpkg/blob/master/README_zh_CN.md
Official website address: https://vcpkg.io/en/getting-started.html
Online documentation: https://vcpkg.readthedocs.io/en/latest/README/
package search: https://vcpkg.io/en/packages.html
git clone https://github.com/microsoft/vcpkg
./bootstrap-vcpkg.sh -disableMetrics
./vcpkg install ffmpeg[x264]
How to install the old version ffmpeg-3.3.1 through vcpkg
In some scenarios a specific older version of a package is needed; vcpkg supports this by checking out an older commit of its tree and installing from there:
git log --color=always --pretty='%Cred%h%Creset -%C(auto)%d%Creset %s %Cgreen(%ad)' --date=short | grep --color=never ffmpeg
git checkout 779307a10
vcpkg install ffmpeg
Unfortunately, the vcpkg tree that carries ffmpeg-3.3.3 does not yet support Linux:
root@localhost# vcpkg install ffmpeg
Error: invalid triplet: x64-linux
Available architecture triplets
VCPKG built-in triplets:
x64-windows-static
x86-windows-static
x86-uwp
x64-uwp
x64-windows
arm-uwp
x86-windows
VCPKG community triplets:
It seems old versions have to be compiled by hand after all; I am recording this here as a note on using the vcpkg tool.
cmake compile
To use libraries installed by vcpkg from a CMake project, pass the vcpkg toolchain file when configuring:
cmake -B [build directory] -S . -DCMAKE_TOOLCHAIN_FILE=[path to vcpkg]/scripts/buildsystems/vcpkg.cmake
Include vcpkg as a submodule
When you want to include vcpkg as a submodule in your project, add the following to CMakeLists.txt before the first project() call; there is then no need to pass CMAKE_TOOLCHAIN_FILE on the CMake command line.
set(CMAKE_TOOLCHAIN_FILE "${CMAKE_CURRENT_SOURCE_DIR}/vcpkg/scripts/buildsystems/vcpkg.cmake"
CACHE STRING "Vcpkg toolchain file")
In this way vcpkg can be used without setting CMAKE_TOOLCHAIN_FILE manually, which makes configuration easier.
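As a minimal sketch of the submodule setup described above (the project name, target name, and source file are hypothetical, and it assumes vcpkg's ffmpeg port installs pkg-config files, which it does on Linux), a CMakeLists.txt might look like:

```cmake
cmake_minimum_required(VERSION 3.15)

# Point CMake at the vcpkg toolchain BEFORE the first project() call,
# assuming vcpkg was added as a git submodule at ./vcpkg
set(CMAKE_TOOLCHAIN_FILE "${CMAKE_CURRENT_SOURCE_DIR}/vcpkg/scripts/buildsystems/vcpkg.cmake"
    CACHE STRING "Vcpkg toolchain file")

project(camdemo C)   # hypothetical project name

# Locate the libraries installed by `vcpkg install ffmpeg[x264]`
# through their pkg-config files
find_package(PkgConfig REQUIRED)
pkg_check_modules(AV REQUIRED IMPORTED_TARGET libavcodec libavutil)

add_executable(camdemo main.c)   # hypothetical source file
target_link_libraries(camdemo PRIVATE PkgConfig::AV)
```

With this in place, a plain `cmake -S . -B build` picks up the vcpkg-installed ffmpeg automatically.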
camkit learning
Learning starts from cktool.c.
capture
camkit selects the target platform through the PLAT cmake parameter. This article uses PLAT=PC and tests on Linux. On Linux, capture uses the V4L2 framework; for the V4L2 initialization process, refer to articles such as "linux calls v4l2 to obtain video" and "Embedded Linux: V4L2 video capture operation process and interface description".
The second type: libcamera
The Raspberry Pi supports 3 camera versions by default: 1) OV5647, the V1 camera; 2) IMX219, the V2 camera; 3) IMX477, the HQ camera. Beyond these three sensors it also supports the IMX290, IMX327, OV9281 and IMX378.
libcamera provides a series of C++ APIs for applications to configure the camera and obtain the frame data of the camera. The acquired image data can be directly used for image coding or video coding. However, work related to coding and rendering is not included.
Raspberry Pi officially provides libcamera-apps, a set of libcamera-based application-layer tools that emulate the interfaces of the legacy stack's Broadcom-GPU-based tools (raspistill and raspivid):
libcamera-hello: A simple "hello world" application that streams from a camera and renders to the screen.
libcamera-jpeg: A small application for capturing high-definition images.
libcamera-still: A more complex image capture application with similar functionality to raspistill.
libcamera-vid: A video capture application.
libcamera-raw: An application for directly capturing raw frames (Bayer format).
libcamera-detect: Not built by default; after installing TensorFlow Lite on the Raspberry Pi, it can be built and used to capture a JPEG image whenever an object is detected.
Besides using the libcamera-apps command line, users can treat these tools as reference examples for building their own applications; see https://github.com/raspberrypi/libcamera-apps .
libcamera
git clone https://git.libcamera.org/libcamera/libcamera.git
cd libcamera
pip3 install --user meson==0.63.3 # libcamera requires meson >= 0.56
meson build --prefix /your/path/to/install
ninja -C build install
For more detailed build instructions, refer to building-libcamera.
libcamera-hello
libcamera-hello --qt-preview -t 0 # Normally --qt-preview is not needed; since I am using VNC, --qt-preview is added to show the camera stream window
libcamera-jpeg
libcamera-jpeg -o test.jpg
libcamera-still
libcamera-still -o test.jpg
#Encoders
libcamera-still -e png -o test.png
libcamera-still -e bmp -o test.bmp
libcamera-still -e rgb -o test.data
libcamera-still -e yuv420 -o test.data
#Raw Image Capture
libcamera-still -r -o test.jpg
#Very long exposures (--shutter is in microseconds; 100000000 = 100 s)
libcamera-still -o long_exposure.jpg --shutter 100000000 --gain 1 --awbgains 1,1 --immediate
libcamera-vid
# basic
libcamera-vid -t 10000 -o test.h264
vlc test.h264
# Encoders
libcamera-vid -t 10000 --codec mjpeg -o test.mjpeg
libcamera-vid -t 10000 --codec yuv420 -o test.yuv
# Network Streaming
# UDP
libcamera-vid -t 0 --inline -o udp://<ip-addr>:<port>
# play
vlc udp://@:<port> :demux=h264
ffplay udp://<ip-addr-of-server>:<port> -fflags nobuffer -flags low_delay -framedrop
# TCP
libcamera-vid -t 0 --inline --listen -o tcp://0.0.0.0:<port>
# play
vlc tcp/h264://<ip-addr-of-server>:<port>
ffplay tcp://<ip-addr-of-server>:<port> -vf "setpts=N/30" -fflags nobuffer -flags low_delay -framedrop
# RTSP
libcamera-vid -t 0 --inline -o - | cvlc stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/stream1}' :demux=h264
# play
vlc rtsp://<ip-addr-of-server>:8554/stream1
ffplay rtsp://<ip-addr-of-server>:8554/stream1 -vf "setpts=N/30" -fflags nobuffer -flags low_delay -framedrop
# libcamera-vid can work with ffmpeg/libav, but my test did not pass
# See https://www.raspberrypi.com/documentation/accessories/camera.html#libav-integration-with-libcamera-vid
libcamera-raw
Not used much; it captures raw Bayer frames.
libcamera-detect
It depends on TensorFlow Lite; if you are interested, refer to libcamera-detect, which this article will not cover.