Jetson Nano + OpenCV + GStreamer: Build Steps, Camera Capture, and How It Works

After flashing the official Jetson Nano image, I found that the bundled OpenCV does not support Python 3. Seeing that Python 2 is on its way out, this post collects the steps for rebuilding OpenCV and briefly explains how the Jetson Nano + OpenCV + GStreamer combination works.

I. Reference Documents

https://jkjung-avt.github.io/setting-up-nano/
https://github.com/jkjung-avt/jetson_nano
https://jkjung-avt.github.io/opencv-on-nano/
https://www.jetsonhacks.com/2019/04/02/jetson-nano-raspberry-pi-camera/
https://github.com/JetsonHacksNano/CSI-Camera
https://blog.csdn.net/u011337602/article/details/81485246
https://developer.download.nvidia.cn/embedded/L4T/r31_Release_v1.0/Docs/Accelerated_GStreamer_User_Guide.pdf

II. Environment Setup

If you have just flashed the Nano image and have not changed the runtime environment, building OpenCV + GStreamer is very straightforward.
1. First, of course, you need a Nano with the initial environment already set up (see the setup guide in the references above).
2. Next, you need a camera. To free up a USB port (you will find more uses for them than you expect), you can use an on-board CSI camera; the IMX-219 module has been tested and works (see the camera references above).
3. Fortunately, the official image already ships with GStreamer, so it does not need to be reinstalled. Here we install OpenCV 3.4.6; the installation procedure is very simple:

$ sudo nvpmodel -m 0   # switch the Nano to maximum-performance mode (10 W)
$ sudo jetson_clocks   
$ cd ${HOME}/project   # start the installation
$ git clone https://github.com/jkjung-avt/jetson_nano.git
$ cd ${HOME}/project/jetson_nano
$ ./install_opencv-3.4.6.sh

4. There is one stumbling block during installation: downloading get-pip.py runs into network problems, and the later pip installs of the dependency packages often fail with timeouts. You can try editing ~/.pip/pip.conf to raise the timeout and avoid the failures, but the download speed is still painful (see the references above). The relevant commands from the script are:

wget https://bootstrap.pypa.io/get-pip.py -O $folder/get-pip.py
sudo python3 $folder/get-pip.py
sudo python2 $folder/get-pip.py
sudo pip3 install protobuf
sudo pip2 install protobuf
sudo pip3 install -U numpy matplotlib
sudo pip2 install -U numpy matplotlib
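
A minimal sketch of the ~/.pip/pip.conf tweak mentioned above (the timeout value and the commented-out mirror are only examples; adjust them to your own network):

[global]
timeout = 120
; optionally point pip at a nearby mirror, for example:
; index-url = https://pypi.tuna.tsinghua.edu.cn/simple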

My solution was to copy the download URL shown during installation, download the package separately (a proxy may be needed), and then install it with pip3 install numpy-1.17.2.zip.
5. If the script fails or gets interrupted partway through the installation, it does not matter; just run it again. If you really want to study the process, you can also copy the commands out of the script and run them line by line yourself.

jetsonnano:~/Jetson-Nano/RoBFang/opencv$ sudo pip3 install -U numpy matplotlib
Collecting numpy
  Downloading https://files.pythonhosted.org/packages/ac/36/325b27ef698684c38b1fe2e546e2e7ef9cecd7037bcdb35c87efec4356af/numpy-1.17.2.zip (6.5MB)

III. Test Code

1. Once the installation is complete, you will be eager to test it. First test the camera and OpenCV separately from the command line:

# Test the camera
$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3820, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=2 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e
# Test OpenCV
$ python3 -c 'import cv2; print("python3 cv2 version: %s" % cv2.__version__)'
$ python2 -c 'import cv2; print("python2 cv2 version: %s" % cv2.__version__)'
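
If the versions print but camera capture fails later on, it can also help to confirm that this OpenCV build was actually compiled with GStreamer support; a quick check (not part of the original scripts):

import cv2

# Print the GStreamer line from OpenCV's build information;
# it should read "GStreamer: YES (...)" for camera capture to work.
for line in cv2.getBuildInformation().splitlines():
    if "GStreamer" in line:
        print(line.strip())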

2. Next, find a basic OpenCV face-detection example and try it:

wget https://github.com/JetsonHacksNano/CSI-Camera/archive/master.zip -O CSI-Camera.zip
unzip CSI-Camera.zip
cd CSI-Camera-master
python3 face_detect.py

You will probably hit an error here, because the classifier file paths hard-coded in face_detect.py do not exist on your system; just use locate haarcascade_eye.xml to find the real path and update these two lines (a sketch of the fix follows the snippet below).

face_detect.py:
    face_cascade = cv2.CascadeClassifier(
        "~YourPath~/haarcascade_frontalface_default.xml"
    )
    eye_cascade = cv2.CascadeClassifier(
        "~YourPath~/haarcascade_eye.xml"
    )
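
As a sketch of the fix, the two paths can be filled in from wherever the build installed its Haar cascade data. The directory below is only a typical location for a from-source OpenCV 3.4.x install; confirm it on your own system with locate as described above:

import os
import cv2

# Assumed location for a from-source OpenCV 3.4.x build -- verify it with
# "locate haarcascade_eye.xml" and adjust if your path differs.
CASCADE_DIR = "/usr/local/share/OpenCV/haarcascades"

face_cascade = cv2.CascadeClassifier(
    os.path.join(CASCADE_DIR, "haarcascade_frontalface_default.xml"))
eye_cascade = cv2.CascadeClassifier(
    os.path.join(CASCADE_DIR, "haarcascade_eye.xml"))

# empty() returns True when the XML could not be loaded, i.e. the path is wrong.
assert not face_cascade.empty(), "face cascade not found"
assert not eye_cascade.empty(), "eye cascade not found"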

IV. How It Works

1. As the test commands show, even without OpenCV a plain gst-launch-1.0 command can already display the camera's video. The division of labour is therefore easy to understand: GStreamer captures the raw video stream from the camera, while OpenCV takes it frame by frame, processes each frame (face detection here), and displays the result.
2. A GStreamer pipeline is built by chaining elements in series; the data stream flows from the src element to the sink element (see the Accelerated GStreamer User Guide referenced above). The pipeline from the camera test breaks down as follows:

1# nvarguscamerasrc ! 
2# 'video/x-raw(memory:NVMM), width=3820, height=2464, framerate=21/1, format=NV12' ! 
3# nvvidconv flip-method=2 ! 
4# 'video/x-raw,width=960, height=616' ! 
5# nvvidconv ! 
6# nvegltransform ! 
7# nveglglessink -e
No.  Role             Effect
1    source element   Captures camera data through the Argus API
2    caps filter      Specifies the source data format (3820x2464, NV12); memory:NVMM requests buffers allocated in contiguous NVMM memory
3    filter element   Format conversion; flip-method=2 rotates the image 180 degrees
4    caps filter      Specifies the scaled output format (960x616)
5    filter element   Format conversion
6    filter element   Converts to EGLImage; EGL is the interface between the rendering API and the native window system
7    sink element     Displays the video on the X11 desktop via EGL
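
When OpenCV does the capturing, the display elements at the end of this chain (nvegltransform / nveglglessink) are replaced by a conversion to BGR and an appsink so that cap.read() can pull frames. A minimal sketch of such a pipeline-string helper, modelled on the gstreamer_pipeline() function in the CSI-Camera example referenced above (the default parameter values are only illustrative):

def gstreamer_pipeline(capture_width=1280, capture_height=720,
                       display_width=960, display_height=616,
                       framerate=21, flip_method=2):
    # Same element chain as the gst-launch test above, but ending in
    # videoconvert ! appsink so OpenCV receives BGR frames.
    return (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), width=%d, height=%d, framerate=%d/1, format=NV12 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=%d, height=%d, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink"
        % (capture_width, capture_height, framerate, flip_method,
           display_width, display_height)
    )

The resulting string is what the face-detection code below passes to cv2.VideoCapture together with cv2.CAP_GSTREAMER.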

3. The OpenCV face-detection code:

import cv2

face_cascade = cv2.CascadeClassifier("~YourPath~/haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier("~YourPath~/haarcascade_eye.xml")
# gstreamer_pipeline() returns the appsink pipeline string sketched above
cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
if cap.isOpened():
    cv2.namedWindow("Face Detect", cv2.WINDOW_AUTOSIZE)
    while cv2.getWindowProperty("Face Detect", 0) >= 0:
        ret, img = cap.read()
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        for (x, y, w, h) in faces:
            cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
            roi_gray = gray[y : y + h, x : x + w]
            roi_color = img[y : y + h, x : x + w]
            eyes = eye_cascade.detectMultiScale(roi_gray)
            for (ex, ey, ew, eh) in eyes:
                cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)
        cv2.imshow("Face Detect", img)
        keyCode = cv2.waitKey(30) & 0xFF
        # Stop the program on the ESC key
        if keyCode == 27:
            break
    cap.release()
    cv2.destroyAllWindows()
Function                       Effect
CascadeClassifier              Loads a feature classifier (face / eye)
VideoCapture                   Opens the video stream
namedWindow                    Creates a display window
getWindowProperty              Checks whether the window is still open (returns -1 once closed)
cvtColor                       Converts the color space
rectangle                      Draws a rectangle
face_cascade.detectMultiScale  Detects all faces in the image
eye_cascade.detectMultiScale   Detects all eyes within a face region
imshow                         Displays the image
waitKey                        Waits up to 30 ms for a key press (ESC exits)