Environment dependencies
Software | Version | Description
---|---|---
mxVision | 5.0.RC2 | mxVision SDK package
Ascend-CANN-toolkit | 6.2.RC2 | Ascend CANN toolkit development package
Ubuntu | 22.04 | Operating system
Code repository:
https://gitee.com/ascend/ascend_community_projects/tree/310B/HelmetIdentification_V2
Image version:
Compilation
Unzip model.zip and move the ONNX file into the project's model directory.
Model conversion: note that the atc-env.sh conversion script included in the source code is not used here.
Configure environment variables:
source /usr/local/Ascend/ascend-toolkit/set_env.sh
source /usr/local/Ascend/mxVision-5.0.RC2/set_env.sh
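As an optional sanity check, the following sketch confirms that the two env scripts exist at the assumed default install paths before you source them (the helper name is our own; adjust the paths if your install differs):

```shell
# Optional sanity check: verify the env scripts exist at the assumed default
# install paths (adjust if your CANN/mxVision install lives elsewhere).
check_env_script() {
    if [ -f "$1" ]; then
        echo "found: $1"
    else
        echo "missing: $1"
    fi
}

check_env_script /usr/local/Ascend/ascend-toolkit/set_env.sh
check_env_script /usr/local/Ascend/mxVision-5.0.RC2/set_env.sh
```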
Convert the model:
cd ~/HelmetIdentification_V2/model
atc --model=./YOLOv5_s.onnx --framework=5 --output=YOLOv5_s --insert_op_conf=./aipp_YOLOv5.config --input_format=NCHW --log=info --soc_version=Ascend310B1 --input_shape="images:1,3,640,640"
After running the above command, the YOLOv5_s.om model file will appear in the model directory.
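To confirm the conversion succeeded, a quick check like the following can be run in the model directory (the YOLOv5_s.om name follows from the --output flag above; the helper function is our own, not part of the project):

```shell
# Verify that atc produced the expected .om file (name derives from --output=YOLOv5_s).
check_model() {
    if [ -f "$1" ]; then
        echo "ok: $1"
    else
        echo "missing: $1"
        return 1
    fi
}

check_model YOLOv5_s.om || echo "run the atc command above first"
```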
Modify CMakeLists.txt
cd ~/HelmetIdentification_V2/src
Lines 24 and 35 reference the include and lib64 directories under /usr/local/Ascend/ascend-toolkit/latest/aarch64-linux; if your toolkit is installed elsewhere, change these paths to your own directories.
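The two lines in question likely resemble the following sketch (hypothetical; the exact form in the repository's CMakeLists.txt may differ, so check your copy):

```cmake
# Hypothetical sketch of the two path entries to adapt:
# around line 24 -- header search path
include_directories(/usr/local/Ascend/ascend-toolkit/latest/aarch64-linux/include)
# around line 35 -- library search path
link_directories(/usr/local/Ascend/ascend-toolkit/latest/aarch64-linux/lib64)
```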
Compilation preparation
Update the package index and install the required library:
apt-get update
apt-get install -y libavformat-dev
Create symbolic links for the three FFmpeg libraries:
ln -s /usr/lib/aarch64-linux-gnu/libavcodec.so.58 /usr/lib/aarch64-linux-gnu/libavcodec.so
ln -s /usr/lib/aarch64-linux-gnu/libavutil.so.56 /usr/lib/aarch64-linux-gnu/libavutil.so
ln -s /usr/lib/aarch64-linux-gnu/libavformat.so.58 /usr/lib/aarch64-linux-gnu/libavformat.so
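Note that ln -s fails if the link already exists, so rerunning the setup will error out. An idempotent variant of the three commands above can be sketched as follows (the helper is our own; the .58/.56 sonames assume the Ubuntu 22.04 FFmpeg packages):

```shell
# Idempotent variant: create each FFmpeg dev symlink only if it is missing.
link_if_missing() {
    # $1 = existing versioned library, $2 = unversioned link name to create
    [ -e "$2" ] || ln -s "$1" "$2"
}

for lib in libavcodec.so.58 libavutil.so.56 libavformat.so.58; do
    src="/usr/lib/aarch64-linux-gnu/$lib"
    if [ -e "$src" ]; then
        link_if_missing "$src" "${src%.*}"   # strip the trailing .58/.56
    fi
done
```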
The code provides two versions of main.cpp, one for video input and one for image input; here we choose the video version. After backing up both files, keep the video main.cpp and remove the image one:
cd ~/HelmetIdentification_V2/src
rm main-image.cpp
cd ..
mkdir build_video
cd build_video
cmake ..
make -j4
At this point, the executable file main will be generated in the ~/HelmetIdentification_V2 directory.
Test
Under the HelmetIdentification_V2 folder, create a result folder containing two subfolders, one and two, to store the results:
cd ~/HelmetIdentification_V2
mkdir result
cd result
mkdir one
mkdir two
Return to the HelmetIdentification_V2 folder and run the following command:
./main test_person.h264 1920 1080
where 1920 and 1080 are the width and height of the input video.
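As a convenience, a small wrapper (our own, not part of the repository) can guard against forgetting the width/height arguments:

```shell
# Hypothetical wrapper around ./main: validates the argument count before running.
run_helmet() {
    if [ "$#" -ne 3 ]; then
        echo "usage: run_helmet <video.h264> <width> <height>" >&2
        return 1
    fi
    ./main "$1" "$2" "$3"
}

# Example: run_helmet test_person.h264 1920 1080
```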
The resulting images are saved in the HelmetIdentification_V2/result folder: the one folder holds the results of the first input, and the two folder holds the results of the second input.
Sample output: the red boxes mark detections where a helmet is not being worn.
Summary
This article can serve as an introductory example for the Ascend series. Once you have run through it, you should have a feel for the overall workflow. What follows will be another unforgettable development journey; I hope you can hold on!
Reference documents:
https://gitee.com/ascend/ascend_community_projects/tree/310B/HelmetIdentification_V2
https://zhuanlan.zhihu.com/p/652517700
If this article was useful to you, please like and bookmark it!
November 24, 2023 14:55:12