"Autonomous Driving Intelligent Early Warning Application Solution Based on AidLux"

Convert YOLOP model to ONNX

ONNX is the abbreviation of Open Neural Network Exchange. The ONNX specification and code are developed jointly by Microsoft, Amazon, Facebook, IBM and other companies, and are hosted on GitHub as open source. Frameworks that officially support loading ONNX models currently include Caffe2, PyTorch, MXNet, ML.NET, TensorRT and Microsoft CNTK; TensorFlow also supports ONNX unofficially.

Exporting the YOLOP model to ONNX

Execute the command:
python3 export_onnx.py --height 640 --width 640
After execution completes, the converted ONNX model will be generated in the weights folder.

ONNX model export process:

1. Load the PyTorch model. You can load only the model structure, or the structure together with its weights.
2. Define the input dimensions of the PyTorch model, such as (1, 3, 640, 640): a three-channel color image with a resolution of 640x640.
3. Use the torch.onnx.export() function to convert and produce the ONNX model, as in the sketch below.
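A minimal sketch of these three steps (the tiny stand-in network, file name, and opset version are illustrative assumptions, not YOLOP's actual export_onnx.py):

import torch
import torch.nn as nn

# Step 1: build/load a model (a tiny stand-in network, not YOLOP)
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
model.eval()

# Step 2: define the input dimensions, here (1, 3, 640, 640)
dummy_input = torch.randn(1, 3, 640, 640)

# Step 3: convert and save the ONNX model
torch.onnx.export(
    model,
    dummy_input,
    "demo.onnx",
    input_names=["images"],
    output_names=["output"],
    opset_version=12,
)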

Inference process

1. Load the model.
2. Get the input and output node names.
3. Prepare data, e.g. with shape (n, c, h, w) or (n, h, w, c).
4. Perform inference and obtain the output, as in the sketch below.
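A minimal ONNX Runtime sketch of these four steps (demo.onnx and the random input are placeholders matching the export sketch above):

import numpy as np
import onnxruntime as ort

# Step 1: load the model
session = ort.InferenceSession("demo.onnx", providers=["CPUExecutionProvider"])

# Step 2: get the input and output node names
input_name = session.get_inputs()[0].name
output_names = [out.name for out in session.get_outputs()]

# Step 3: prepare data with shape (n, c, h, w)
data = np.random.rand(1, 3, 640, 640).astype(np.float32)

# Step 4: run inference and obtain the output
outputs = session.run(output_names, {input_name: data})
print([out.shape for out in outputs])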

Visualizing ONNX models

Use Netron to visualize the ONNX model: examine the network structure and check which operators are used, which helps development and deployment.
Netron is a lightweight, cross-platform model visualization tool that supports many deep learning frameworks, including TensorFlow, PyTorch, ONNX, Keras, Caffe, etc. It shows the network structure, hierarchical relationships, output sizes, weights and other information, and the model can be browsed by panning and zooming with the mouse. Netron also supports exporting and importing models, which makes model sharing and communication easy.
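Besides the desktop app, Netron can be launched from Python, assuming the netron package is installed (pip install netron); the model path is a placeholder:

import netron

# Serve the model visualization on a local web page and open the browser
netron.start("demo.onnx")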

Deployment and application of the YOLOP model on AidLux

Introduction to AidLux

AidLux is an intelligent Internet of Things (AIoT) application development and deployment platform built on ARM hardware and an innovative cross-Android/HarmonyOS + Linux converged system environment. AidLux is very easy to use and can be installed on edge devices such as mobile phones, tablets, and ARM development boards. Moreover, during development with AidLux you can not only develop locally on the edge device, but also access the edge desktop through a web browser for development. AidLux can be downloaded from all major application stores: search for, download, and install AidLux in the mobile application store.


AidLux programming interface

AidLux interface documentation:
https://docs.aidlux.com/#/intro/ai/ai-aidlite

Connect AidLux

Connect the mobile phone to the same Wi-Fi network as the computer, open the installed AidLux app on the phone, and tap Cloud_ip (the second icon in the first row). An IP URL will pop up on the phone's screen. Enter that IP in the computer's browser to project the phone's system onto the computer. Once connected, you can use the phone's computing power to perform model inference.

Upload project to AidLux

1. Click the file browser to open the file management page.
2. Find the home folder and double-click to enter it.
3. Click the upward arrow "upload" in the upper right corner, then select Folder to upload the YOLOP folder from earlier into the home folder. (You can also drag the folder directly into the directory.)

Installation Environment

1. Open the terminal and switch to the project directory.
2. Execute the command pip install -r requirements.txt to install the dependencies.
3. Install pytorch, torchvision, and onnxruntime:

pip install torch==1.8.1 torchvision==0.9.1 -i https://pypi.mirrors.ustc.edu.cn/simple/
pip install onnxruntime -i https://pypi.mirrors.ustc.edu.cn/simple/

If other packages are missing, you can install them directly using pip install.

Run demo.py

To verify the inference effect, execute the command:
python tools/demo.py --source inference/images
Error when running: module 'cv2' has no attribute '_registerMatType'.
Solution: uninstall opencv-python and opencv-contrib-python, then install only a lower version of opencv-python.
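For the uninstall step, standard pip usage would be:

pip uninstall opencv-python opencv-contrib-python

Then install the lower version: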

pip install opencv_python==4.5.4.60 -i https://pypi.mirrors.ustc.edu.cn/simple/

After running successfully, the result file will be stored in the inference/output folder. You can go to this folder to view the inference results.

Intelligent early warning system code practice

Intelligent early warning

The intelligent early warning system contains three tasks: object detection, drivable area detection, and lane line detection.
Sensor: forward-looking camera.
Object detection: detect vehicles.
Drivable area detection: detect the areas that can be driven on, providing path-planning assistance for autonomous driving.
Lane line detection: an environment perception task whose purpose is to detect lane lines through the on-board camera or lidar.
1. Input: read video frames as input; image size 1920x1080.
2. Preprocessing
2.1 Resize: the 1920x1080 input is resized (with padding) to 640x640.
2.2 Normalize.
2.3 Transpose: 640x640x3 -> 1x3x640x640.
3. Inference with the ONNX model: read the model -> prepare data -> run inference, obtaining det_out, da_seg_out, ll_seg_out with shapes (1, n, 6), (1, 2, 640, 640), (1, 2, 640, 640) respectively.
4. Post-processing
4.1 Merge the detection results, drivable area results, and lane line results into one image, marked in different colors.
4.2 Display the frame number, frame rate, vehicle count and other information on the image.
5. Output: obtain the final fused image and save it as a video. The image size, frame rate, and encoding match those of the original video. A sketch of steps 2 and 3 follows.
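A minimal sketch of the preprocessing and ONNX inference steps (the model path, letterbox details, and plain /255 normalization are assumptions; the actual code may convert BGR to RGB and use mean/std normalization):

import cv2
import numpy as np
import onnxruntime as ort

def preprocess(frame):
    # 2.1 resize + pad the frame to 640x640 (simple top-left letterbox)
    h, w = frame.shape[:2]
    scale = 640 / max(h, w)
    resized = cv2.resize(frame, (int(w * scale), int(h * scale)))
    canvas = np.zeros((640, 640, 3), dtype=np.uint8)
    canvas[:resized.shape[0], :resized.shape[1]] = resized
    # 2.2 normalize to [0, 1]
    img = canvas.astype(np.float32) / 255.0
    # 2.3 HWC (640, 640, 3) -> NCHW (1, 3, 640, 640)
    return img.transpose(2, 0, 1)[np.newaxis]

# 3. read the model, prepare data, run inference
session = ort.InferenceSession("weights/yolop-640-640.onnx",
                               providers=["CPUExecutionProvider"])
frame = cv2.imread("inference/images/example.jpg")  # placeholder image path
det_out, da_seg_out, ll_seg_out = session.run(
    None, {session.get_inputs()[0].name: preprocess(frame)}
)
print(det_out.shape, da_seg_out.shape, ll_seg_out.shape)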

Early warning code

forewarning.py is the intelligent early warning code.
Execute the command: python forewarning.py. It draws Chinese text on the image; if an error is reported, refer to the following solution.
Error: OSError: cannot open resource (Chinese fonts are missing).
Solution: upload simsun.ttc to the /usr/share/fonts/ folder (simsun.ttc has been stored in Baidu Cloud Disk).
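This error typically comes from Pillow failing to open the font file when drawing text; a minimal sketch of drawing Chinese text with simsun.ttc (the size, position, and message are illustrative, not the exact forewarning.py code):

from PIL import Image, ImageDraw, ImageFont

# Fails with "OSError: cannot open resource" if simsun.ttc is missing
font = ImageFont.truetype("/usr/share/fonts/simsun.ttc", 32)
img = Image.new("RGB", (640, 80), (0, 0, 0))
ImageDraw.Draw(img).text((10, 20), "前方车辆，注意保持车距", font=font, fill=(255, 255, 255))
img.save("warning_text.png")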

Final inference process and results

The inference results are shown in the Bilibili video "Intelligent early warning application solution for autonomous driving based on AidLux".
