PaddlePaddle powers 3C on-board intelligent identification for EMUs, safeguarding EMU operation


Background of the project

The EMU vehicle-mounted catenary operating state detection device (3C) is a detection device installed on in-service EMUs. Travelling with the EMU, it performs all-weather, normal-speed online detection and monitoring of the catenary and of pantograph-catenary interaction, producing catenary geometric parameters, infrared images, visible-light images and other results that are used to guide catenary maintenance.

Affected by factors such as catenary tension, mechanical vibration, weather and equipment service life, typical defects tend to appear during catenary operation, such as bird damage, broken droppers (hanging strings), droppers falling off, loose droppers and foreign-object entanglement. Although such defects do not directly cause serious consequences in the short term, their accumulation may lead to major accidents such as catenary collapse, resulting in train service suspension. In addition, during train operation, changes in catenary parameters, foreign objects and other factors can cause major accidents such as pantograph strikes and damage, and it is difficult to determine the details of such an accident quickly with conventional means. Monitoring of this type of accident cannot prevent the accident that is already happening, but once the accident is captured it can provide accurate operating guidance for subsequent trains and greatly reduce the impact of the accident and the economic losses.

Based on 3C visible-light images, this project studies how to implement an on-board real-time intelligent identification system that can monitor high-risk defects in catenary detection and pantograph monitoring.


To observe and capture missing or loose catenary components, foreign objects and pantograph abnormalities that affect driving safety, 3C equipment must operate all-weather, in open environments and in real time, and a collaborative solution is needed to cope with complex outdoor scenes, huge data volumes and real-time analysis. However, limited on-board hardware resources and the limited power budget of on-board equipment make on-board real-time analysis difficult. This project therefore uses artificial intelligence to further squeeze computing time and extend the detection content on limited on-board computing resources while keeping the detection metrics, realizing an edge intelligent-perception capability that collects image data and analyzes image defects in real time. This compensates for the fact that 3C data volumes are large and analysis is not timely, so that important catenary-related defects can be discovered and reported promptly while the vehicle is in service, providing an effective means for intelligent power-supply inspection.


Project Introduction

Catenary detection

Data preparation and preprocessing:

Data categories: bird damage, broken droppers, droppers falling off, and foreign objects.

Amount of data: About 10,000 images in total.

Data split: the dataset is divided by stratified sampling so that the training:validation:test ratio within each category is as close as possible to 6:2:2 (a split sketch is shown below).
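As a minimal sketch of such a stratified split, assuming each image carries a single dominant defect label and that scikit-learn is available (the file-list and label variables are placeholders):

from sklearn.model_selection import train_test_split

def stratified_split(image_paths, labels, seed=42):
    """Split into train/val/test at roughly 6:2:2 while preserving class ratios."""
    # Carve out the 60% training portion, stratified by label.
    train_x, rest_x, train_y, rest_y = train_test_split(
        image_paths, labels, train_size=0.6, stratify=labels, random_state=seed)
    # Split the remaining 40% evenly into validation and test (20% each overall).
    val_x, test_x, val_y, test_y = train_test_split(
        rest_x, rest_y, train_size=0.5, stratify=rest_y, random_state=seed)
    return (train_x, train_y), (val_x, val_y), (test_x, test_y)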

Preprocessing Notes:

1. The targets in the dataset are relatively small and backlighting is common outdoors, so the brightness-adjustment range used in data augmentation must be chosen carefully;

2. Resizing distorts some images and blurs the target objects, so the original aspect ratio of the image should be preserved (see the padded-resize sketch after this list);

3. If the training server is CPU-bound, DecodeCache can be used instead of Decode in reader.yml to relieve some of the CPU pressure.
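For note 2, a common way to keep the original aspect ratio is a padded ("letterbox") resize. Below is a minimal sketch under that assumption; the 416x416 target matches the picodet_s_416 input, and the padding value is arbitrary:

import cv2
import numpy as np

def letterbox_resize(img, target=(416, 416), pad_value=114):
    """Resize an HxWx3 image without distortion: scale to fit, then pad to the target size."""
    h, w = img.shape[:2]
    th, tw = target
    scale = min(th / h, tw / w)                          # keep the original aspect ratio
    nh, nw = int(round(h * scale)), int(round(w * scale))
    resized = cv2.resize(img, (nw, nh), interpolation=cv2.INTER_LINEAR)
    canvas = np.full((th, tw, 3), pad_value, dtype=img.dtype)
    top, left = (th - nh) // 2, (tw - nw) // 2           # center the image on the canvas
    canvas[top:top + nh, left:left + nw] = resized
    return canvas, scale, (left, top)                    # scale/offset for mapping boxes back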

Model training and evaluation

For the on-board edge detection scenario, this project experiments with the PP-PicoDet series models from the PaddleDetection object-detection suite.

1. After tuning the hyperparameters, run the following command for multi-GPU training, with VisualDL logging and evaluation enabled during training.

export CUDA_VISIBLE_DEVICES=0,1,2,3
python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py \
    -c /xsw/train/model/C3/picodet/picodet_s_416.yml \
    --use_vdl=true \
    --vdl_log_dir=/xsw/train/model/C3/picodet/vdl_dir_picodet_s_416/scalar \
    --eval

2. Monitor the training status in VisualDL.

visualdl --logdir \\10.2.3.25\shareXSW\train\model\C3\picodet\vdl_dir_picodet_s_416\scalar --port=8041

3. Using the best model, compute the per-class AP on the validation set and the test set.

python tools/eval.py -c /xsw/train/model/C3/picodet/picodet_s_416.yml \
    -o weights=/xsw/train/model/C3/picodet/output/picodet_s_416/picodet_s_416/best_model \
    --classwise

4. The evaluation results of the validation set are shown in the following table.

[Table: validation set evaluation results]

5. The test set evaluation results are shown in the following table.

[Table: test set evaluation results]

 Model deployment

Because the existing equipment must also handle tasks such as data acquisition and geometric-parameter measurement and calculation, the available CPU resources are limited. Considering cost, operation-and-maintenance and power constraints, we used the otherwise idle integrated graphics card and adopted an OpenVINO-based deployment. This approach gives lower inference time and lower CPU usage, leaving headroom for the remaining services.

For model export, refer to the PaddleDetection documentation.

Tutorial on exporting models to ONNX format: https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.3/deploy/EXPORT_ONNX_MODEL.md

1. Export the Paddle inference model:

python tools/export_model.py -c /xsw/train/model/C3/picodet/picodet_s_416.yml \
    -o weights=/xsw/train/model/C3/picodet/output/picodet_s_416/picodet_s_416/best_model.pdparams \
    TestReader.inputs_def.image_shape=[3,416,416] \
    --output_dir /xsw/train/model/C3/picodet/inference_model

2. Export the PaddleDetection model to ONNX:

paddle2onnx --model_dir /xsw/train/model/C3/picodet/inference_model/picodet_s_416 \
    --model_filename model.pdmodel \
    --params_filename model.pdiparams \
    --opset_version 11 \
    --save_file /xsw/train/model/C3/picodet/picodet_s_416.onnx
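Before deployment it is worth sanity-checking the exported ONNX file. The sketch below is an optional check using the onnx and onnxruntime packages (not part of the original pipeline); the model path follows the export command above, and any dynamic input dimension is simply replaced with 1:

import numpy as np
import onnx
import onnxruntime as ort

onnx_path = "/xsw/train/model/C3/picodet/picodet_s_416.onnx"

# Structural check of the exported graph.
onnx.checker.check_model(onnx.load(onnx_path))

# One dummy forward pass on CPU to confirm the model is executable.
sess = ort.InferenceSession(onnx_path, providers=["CPUExecutionProvider"])
feeds = {}
for inp in sess.get_inputs():
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]   # dynamic dims -> 1
    feeds[inp.name] = np.random.rand(*shape).astype(np.float32)
outputs = sess.run(None, feeds)
print([o.shape for o in outputs])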

3. Run the ONNX model with the OpenCV DNN module. Note that for integrated-graphics inference, the OpenCV DNN module provides two backends: the built-in OpenCV backend and the Intel Inference Engine (nGraph) backend. Different usage can lead to very different inference times.

1) OpenCV backend

This backend can read the ONNX model directly, but its support for operation sets in the network is limited, and additional operations may need to be implemented as custom layers.

Reference for currently supported operations:

The supported layers

https://github.com/opencv/opencv/wiki/Deep-Learning-in-OpenCV

Custom Operations Sets reference: 

tutorial_dnn_custom_layers

https://docs.opencv.org/4.x/dc/db1/tutorial_dnn_custom_layers.html

2) Intel Inference Engine (nGraph) backend

This backend can read both ONNX models and IR models directly.

a. An ONNX model is optimized by nGraph and then run; this path supports relatively many operation sets and is easy to use;

b. Reading an IR model is the best option: in our experiments it minimizes inference time. However, OpenVINO must be installed first and used to optimize the ONNX model into an IR model, and because of integrated-graphics driver support it should preferably be used on hardware with a 4th-generation or later Intel CPU;

Reference for available operation sets: Available Operations Sets

https://docs.openvino.ai/latest/openvino_docs_ops_opset7.html#doxid-openvino-docs-ops-opset7

IR model conversion reference: 

Convert a PaddlePaddle Model to ONNX and OpenVINO IR

https://docs.openvino.ai/latest/notebooks/103-paddle-onnx-to-openvino-classification-with-output.html
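A minimal sketch of switching between the two backends through OpenCV's Python DNN API is shown below. The paths, input size and normalization are assumptions (real PicoDet preprocessing also subtracts the per-channel mean and divides by the standard deviation), and the Inference Engine backend only works if OpenCV was built with OpenVINO support:

import cv2

onnx_path = "/xsw/train/model/C3/picodet/picodet_s_416.onnx"
net = cv2.dnn.readNetFromONNX(onnx_path)
# To load an optimized IR model instead: net = cv2.dnn.readNet("model.xml", "model.bin")

use_inference_engine = True
if use_inference_engine:
    # Intel Inference Engine (nGraph) backend running on the integrated GPU.
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_OPENCL)
else:
    # Plain OpenCV backend on the CPU.
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

img = cv2.imread("sample.jpg")
# Simplified preprocessing: scale to [0, 1] and convert BGR to RGB.
blob = cv2.dnn.blobFromImage(img, scalefactor=1.0 / 255, size=(416, 416), swapRB=True)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())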

4. Data post-processing reference:

PicoDetPostProcess

https://github.com/PaddlePaddle/PaddleDetection/blob/develop/deploy/cpp/src/picodet_postprocess.cc
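The full PicoDet post-processing (distribution-focal-loss box decoding over the feature-pyramid levels) lives in the C++ file linked above. Purely to illustrate its final stage, the sketch below applies score filtering and class-wise NMS to boxes assumed to be already decoded to pixel coordinates; the thresholds are placeholders:

import numpy as np
import cv2

def filter_and_nms(boxes, scores, score_thr=0.4, nms_thr=0.5):
    """boxes: (N, 4) as [x, y, w, h]; scores: (N, num_classes). Returns (class, score, box) tuples."""
    results = []
    for cls in range(scores.shape[1]):
        cls_scores = scores[:, cls]
        keep = cls_scores >= score_thr                  # drop low-confidence candidates
        if not keep.any():
            continue
        cand_boxes = boxes[keep].tolist()
        cand_scores = cls_scores[keep].tolist()
        idxs = cv2.dnn.NMSBoxes(cand_boxes, cand_scores, score_thr, nms_thr)
        for i in np.array(idxs).flatten():
            results.append((cls, cand_scores[int(i)], cand_boxes[int(i)]))
    return results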

5. The inference time is shown in the table below.

GPU: Intel HD Graphics 530

CPU: Intel(R) Core(TM) i7-6700 @ 3.40GHz

VPU: Intel Movidius Myriad X

[Table: catenary detection inference time]

The project finally uses the picodet_lcnet_416 model; its metrics are 2% higher than the original custom model and its inference speed is 53% faster, which makes further business algorithms feasible on the device.

Pantograph monitoring

For the on-board edge detection scenario, this project experiments with the keypoint models in PaddleDetection.

Data preparation and preprocessing

Data categories: the internal dataset is annotated with 10 pantograph key points;

Data volume: about 2,000 images in total;

Data split: stratified sampling is used so that the training:validation ratio within each category is as close as possible to 8:2.

Model training and evaluation

Unlike common human pose estimation, the pose-detection target in this project is a pantograph, so the target object of the dataset changes. For keypoint training on a custom dataset, because the number of key points and their meaning change, several modifications are needed:

1. Due to changes in the number of key points and target objects, some source code and configurations need to be modified:

# *.yml training config; only the key parameters that change with the new target object and keypoint count are listed
num_joints: &num_joints 10
train_height: &train_height 288
train_width: &train_width 384
# output heatmap size (width, height)
hmsize: &hmsize [96, 72]
# left/right keypoint correspondence when the image is horizontally flipped
flip_perm: &flip_perm [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]

2. The keypoint evaluation metric currently used in PaddleDetection is based on COCO OKS, and the kpt_oks_sigmas parameter in OKS comes from the COCO keypoint dataset. Because the number of key points and the annotation content of the custom dataset change, the corresponding kpt_oks_sigmas must be modified to fit the custom dataset (an illustrative computation is sketched after the code snippet below). kpt_oks_sigmas is the per-keypoint standard deviation of the dataset; on COCO it is the standard deviation obtained from 5,000 redundant annotations of the same targets. The larger the value, the worse the annotation consistency of that point across the dataset; the smaller the value, the better the consistency. For example, in the COCO dataset: nose 0.026, eyes 0.025, ears 0.035, shoulders 0.079, elbows 0.072, wrists 0.062, hips 0.107, knees 0.087, ankles 0.089.

# ppdet/metrics/metrics.py
COCO_SIGMAS = ...

# ppdet/modeling/keypoint_utils.py, inside def oks_iou(...):
...
sigmas = ...
...

# cocoeval.py in the installed pycocotools package, class Params:
...
def setKpParams(self):
    ...
    self.kpt_oks_sigmas = ...
    ...
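To illustrate how kpt_oks_sigmas enters the metric, the sketch below computes OKS following the COCO definition; the 10 placeholder sigma values are assumptions and must be replaced with values measured on the pantograph annotations:

import numpy as np

# Placeholder sigmas for 10 pantograph key points (illustration only).
PANTOGRAPH_SIGMAS = np.array([0.05] * 10)

def object_keypoint_similarity(pred, gt, visibility, area, sigmas=PANTOGRAPH_SIGMAS):
    """pred, gt: (K, 2) keypoint coordinates; visibility: (K,) flags; area: ground-truth box area."""
    d2 = np.sum((pred - gt) ** 2, axis=1)          # squared distance per keypoint
    k2 = (2 * sigmas) ** 2                         # per-keypoint constant, as in COCO OKS
    e = d2 / (2 * area * k2 + np.spacing(1))       # normalized error
    visible = visibility > 0
    if not visible.any():
        return 0.0
    return float(np.mean(np.exp(-e[visible])))     # average similarity over labeled keypoints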

3. To visualize the keypoint skeleton results, the corresponding source code also needs to be modified (an illustrative sketch follows the snippet below):

# ppdet/utils/visualizer.py
EDGES = ...
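As an illustration only, the sketch below shows what a modified EDGES could look like for 10 pantograph key points and how the skeleton could be drawn with OpenCV; the index pairs are hypothetical and must match the actual annotation order:

import cv2

# Hypothetical connectivity between the 10 pantograph key points (index pairs into the keypoint list).
EDGES = [(0, 2), (1, 3), (2, 4), (3, 5), (4, 6), (5, 7), (6, 8), (7, 9), (8, 9)]

def draw_skeleton(img, keypoints, color=(0, 255, 0)):
    """keypoints: list of 10 (x, y) pixel coordinates."""
    for a, b in EDGES:
        cv2.line(img, tuple(map(int, keypoints[a])), tuple(map(int, keypoints[b])), color, 2)
    for x, y in keypoints:
        cv2.circle(img, (int(x), int(y)), 3, (0, 0, 255), -1)
    return img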

4. After tuning the hyperparameters, run training with VisualDL logging and evaluation enabled.

# Training
python tools/train.py -c /xsw/train/model/C3_BowPt/hrnet/dark_hrnet.yml \
    --use_vdl=true \
    --vdl_log_dir=/xsw/train/model/C3_BowPt/hrnet/vdl_dir_hrnet_dark_hrnet/scalar \
    --eval

# Monitoring
visualdl --logdir \\10.2.3.25\shareXSW\train\model\C3_BowPt\hrnet\vdl_dir_hrnet_dark_hrnet\scalar --port=8041

# Evaluation
python tools/eval.py -c /xsw/train/model/C3_BowPt/hrnet/dark_hrnet.yml \
    -o Global.checkpoints=/xsw/train/model/C3_BowPt/hrnet/output/dark_hrnet_w32/best_model.pdparams

5. Run inference on unlabeled data to test the model.

python tools/infer.py -c /xsw/train/model/C3_BowPt/hrnet/dark_hrnet.yml \
    -o weights=/xsw/train/model/C3_BowPt/hrnet/output/dark_hrnet/best_model.pdparams \
    --infer_dir=/xsw/train/data/C3_BowPt_cut/poc/val/images \
    --draw_threshold=0.2 \
    --save_txt=True \
    --output_dir=/xsw/train/model/C3_BowPt/hrnet/output/infer_dark_hrnet

6. Validation set evaluation results.

[Table: pantograph keypoint validation set evaluation results]

Model deployment

1. Export the Paddle inference model.

python tools/export_model.py -c /xsw/train/model/C3_BowPt/hrnet/dark_hrnet.yml \
    -o weights=/xsw/train/model/C3_BowPt/hrnet/output/dark_hrnet/best_model.pdparams \
    --output_dir /xsw/train/model/C3_BowPt/hrnet/inference_model

2. Export the PaddleDetection model to ONNX.

paddle2onnx --model_dir /xsw/train/model/C3_BowPt/hrnet/inference_model/dark_hrnet \
    --model_filename model.pdmodel \
    --params_filename model.pdiparams \
    --opset_version 11 \
    --save_file /xsw/train/model/C3_BowPt/hrnet/dark_hrnet.onnx

3. The inference time is shown in the table below.

GPU: Intel HD Graphics 530

CPU: Intel(R) Core(TM) i7-6700 @ 3.40GHz

[Table: pantograph keypoint inference time]

Project effect

Visualization of catenary inspection results

[Figure: bird damage]

[Figure: dropper falling off]

[Figure: broken dropper]

[Figure: foreign object]

Visualization of pantograph monitoring results

[Figure: pantograph keypoint detection result]

Overall effect

To date, the new version of the algorithm has been rolled out to 30 high-speed EMU trains and 16 locomotives. Cumulative line inspection exceeds 1.7 million km in a single month, and more than 3,000 defects of the above types have been detected.

Welcome to PaddleDetection: https://github.com/PaddlePaddle/PaddleDetection
