Tilt correction of pointer instruments based on deep learning: an interpretation of the paper

Chinese paper title (translated): A method for tilt correction of pointer instruments based on deep learning

English paper title: Tilt Correction Method of Pointer Meter Based on Deep Learning

Zhou Dengke, Yang Ying, Zhu Jie, Wang Ku. A pointer instrument tilt correction method based on deep learning [J]. Journal of Computer-Aided Design & Computer Graphics, 2020, 32(12). DOI: 10.3724/SP.J.1089.2020.18288.

1. Abstract:

        Aiming at the reading errors caused by tilted meters in automatic meter image recognition, a fast tilt correction method for circular pointer meters based on deep learning is proposed, which can perform both tilt correction and rotation correction of meter images. The method uses a convolutional neural network to extract key points on the dial, each centered on a scale number, and then fits an ellipse to these key points with the least squares method.
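The paper's code has not been released, so as a rough illustration of the least-squares ellipse-fitting step only, OpenCV's cv2.fitEllipse can be used over the detected key points; the coordinates below are made-up placeholders standing in for the scale-number centers returned by the detector:

import cv2
import numpy as np

# Made-up key point coordinates (scale-number centers from the detector);
# cv2.fitEllipse needs at least 5 points.
key_points = np.array([[120, 310], [150, 210], [230, 140], [340, 120],
                       [450, 150], [520, 230], [540, 330]], dtype=np.float32)

# Least-squares ellipse fit; returns the rotated-rect form:
# ((cx, cy), (axis1_length, axis2_length), rotation_angle_in_degrees).
(cx, cy), (ax1, ax2), angle = cv2.fitEllipse(key_points)
print("ellipse center:", (cx, cy), "axes:", (ax1, ax2), "angle:", angle)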

        Combined with ellipse transformation theory, a perspective transformation is applied to the meter image as the first (tilt) correction. Then, from a pair of key points that are symmetric about the vertical central axis of the dial, the rotation angle of the meter relative to the horizontal direction is computed, and the image is rotated about the geometric center of the fitted ellipse as the second (rotation) correction.
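A minimal sketch of the rotation correction, assuming the ellipse center (cx, cy) from the fit above and a hypothetical pair of key points that should lie on a horizontal line (i.e. symmetric about the vertical central axis) after correction; the angle of the line joining them gives the rotation to undo:

import cv2
import numpy as np

def rotate_about_center(img, p_left, p_right, center):
    # Angle of the line joining the symmetric key point pair, w.r.t. horizontal.
    dx, dy = p_right[0] - p_left[0], p_right[1] - p_left[1]
    angle = np.degrees(np.arctan2(dy, dx))
    # Rotate about the fitted ellipse center so that the pair becomes horizontal.
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = img.shape[:2]
    return cv2.warpAffine(img, M, (w, h))

# Hypothetical usage (coordinates are placeholders):
# rotated = rotate_about_center(corrected_img, (180, 240), (420, 255), (cx, cy))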

        The performance of the method is verified on image data collected in a real substation environment. The experimental results show that the method is more robust than traditional methods: the average relative error of the meter readings is reduced to 3.99% and the average reference error to 0.91%, which demonstrates the effectiveness of the correction method.

        In short, existing tilt correction methods for pointer meters cannot perform tilt correction and rotation correction at the same time, and the correction process is slow with poor results; to address this, the paper proposes a deep-learning-based tilt correction method for pointer meters.

2. Algorithm detection process

The method is divided into two parts: dial key point extraction and meter correction.

        For key point extraction, the end-to-end deep learning detector YOLOv3 is used to extract the coordinates of key points on the dial, each centered on a scale number. (You can collect meter dial images from the Internet and train such a detector yourself; the code and data used by the authors have not been found publicly so far.) A rough sketch of this step is given after the reference links below.

There is plenty of material on the Internet about key point detection, so it is not covered in detail here. For example, the following reference walks through the training and detection pipeline for face key points and provides detailed code and datasets:

Face and key point detection: a hands-on YOLO5Face tutorial
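Since the authors' trained model is not public, the following is only a hedged sketch of how YOLOv3 detections can be turned into key point coordinates through OpenCV's DNN module; the cfg/weights file names are placeholders for a model trained to detect the dial scale numbers:

import cv2
import numpy as np

# Placeholder model files: a YOLOv3 network trained to detect dial scale numbers.
net = cv2.dnn.readNetFromDarknet("yolov3-meter.cfg", "yolov3-meter.weights")

def extract_key_points(img, conf_thres=0.5):
    """Run the detector and return the detected box centers as key points."""
    h, w = img.shape[:2]
    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    points = []
    for out in outputs:
        for det in out:                       # det = [cx, cy, bw, bh, obj, class scores...]
            confidence = det[4] * det[5:].max()
            if confidence > conf_thres:
                cx, cy = det[0] * w, det[1] * h   # YOLO outputs are normalized
                points.append((cx, cy))
    return np.array(points, dtype=np.float32)

# Hypothetical usage:
# img = cv2.imread("meter.jpg")
# key_points = extract_key_points(img)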

        Meter correction is divided into tilt correction and rotation correction. First, a perspective transformation matrix is computed from the extracted key point coordinates and the perspective transformation is applied, which is the first (tilt) correction of the meter. Then the image is rotated according to a pair of key points that are symmetric about the vertical central axis of the dial, which is the second (rotation) correction. The figure in the paper shows the overall framework of the meter image tilt correction.
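A hedged sketch of the tilt (perspective) correction under a simple assumption: the endpoints of the fitted ellipse's axes are mapped to the corresponding points of a circle, and cv2.getPerspectiveTransform builds the transformation matrix. This is one straightforward way to construct the transform, not necessarily the exact formulation derived in the paper:

import cv2
import numpy as np

def tilt_correct(img, ellipse):
    """Warp the image so that the fitted ellipse becomes a circle.

    ellipse is the cv2.fitEllipse result: ((cx, cy), (w, h), angle_deg).
    """
    (cx, cy), (w, h), angle = ellipse
    a, b = w / 2.0, h / 2.0                  # semi-axis lengths
    theta = np.radians(angle)
    ca, sa = np.cos(theta), np.sin(theta)

    # Endpoints of the two ellipse axes in image coordinates.
    src = np.float32([
        [cx + a * ca, cy + a * sa],
        [cx - a * ca, cy - a * sa],
        [cx - b * sa, cy + b * ca],
        [cx + b * sa, cy - b * ca],
    ])

    # Corresponding points on a circle of radius r centered at the same point.
    r = max(a, b)
    dst = np.float32([
        [cx + r * ca, cy + r * sa],
        [cx - r * ca, cy - r * sa],
        [cx - r * sa, cy + r * ca],
        [cx + r * sa, cy - r * ca],
    ])

    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))

# Hypothetical usage:
# ellipse = cv2.fitEllipse(key_points)
# corrected_img = tilt_correct(img, ellipse)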

3. Detection effect and verification

         Finally, to verify that the correction method in this paper is more stable and effective than traditional meter correction methods [12-13], 10 tilted meter images collected in the real substation environment were selected for correction experiments. The corrected images are shown in Figure 12 and the correction efficiency and time are shown in Table 3 of the paper. In the efficiency statistics, a correction is counted as effective when the corrected image shows a clear improvement of the dial scale over the original image and can be used for meter reading; if the perspective transformation deforms an image more than the original, the correction is counted as invalid, as with the last 7 images shown in Figure 12b and the last 5 images shown in Figure 12c.

4. Conclusion

        Tilt correction of pointer meter images is an important task in meter reading recognition. Since traditional image correction methods struggle to correct meters in complex environments, this paper proposes a deep-learning-based tilt correction method for pointer meters. The method uses a deep convolutional neural network to extract the key points on the dial centered on the scale numbers, and then uses the key point information to perform tilt correction and rotation correction of the meter image at the same time. The experimental results show that, compared with traditional correction methods, the proposed method achieves better meter correction, and reading the corrected meter images improves recognition accuracy. Meter images collected in substations and industrial environments appear at various inclinations; correcting them with this method before recognition improves reading accuracy and has practical value.

5. Extension: meter tilt correction based on SIFT features (OpenCV Python code)

import numpy as np
import cv2
from matplotlib import pyplot as plt
# Reference link:
# https://www.javaroad.cn/questions/347518#toolbar-title

def deskew():
    # Warp the tilted image (img2) into the upright frame of img1 using the
    # inverse of the homography M estimated from img1 -> img2 matches.
    im_out = cv2.warpPerspective(img2, M, (img1.shape[1], img1.shape[0]),
                                 flags=cv2.WARP_INVERSE_MAP)
    plt.imshow(im_out, 'gray')
    plt.show()


# Resize images to improve speed; img1 is the upright reference meter image,
# img2 is the tilted meter image to be corrected.
factor = 0.4
img1 = cv2.resize(cv2.imread("./img/zheng2.png", 0), None, fx=factor, fy=factor, interpolation=cv2.INTER_CUBIC)
img2 = cv2.resize(cv2.imread("./img/xie2.png", 0), None, fx=factor, fy=factor, interpolation=cv2.INTER_CUBIC)

# SURF_create is patented and raises an error if run directly with stock opencv-python;
# SIFT_create runs directly (see the "Special note" section below for how to enable SURF).
sift = cv2.xfeatures2d.SIFT_create()  # with OpenCV >= 4.4, cv2.SIFT_create() can be used instead
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

# store all the good matches as per Lowe's ratio test.
good = []
for m, n in matches:
    if m.distance < 0.7 * n.distance:
        good.append(m)

MIN_MATCH_COUNT = 10
if len(good) > MIN_MATCH_COUNT:
    src_pts = np.float32([kp1[m.queryIdx].pt for m in good
                          ]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good
                          ]).reshape(-1, 1, 2)

    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    matchesMask = mask.ravel().tolist()
    h, w = img1.shape
    pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
    dst = cv2.perspectiveTransform(pts, M)

    deskew()

    img2 = cv2.polylines(img2, [np.int32(dst)], True, 255, 3, cv2.LINE_AA)
else:
    print("Not  enough  matches are found   -   %d/%d" % (len(good), MIN_MATCH_COUNT))
    matchesMask = None

# show matching keypoints
draw_params = dict(matchColor=(0, 255, 0),  # draw matches in green
                   singlePointColor=None,
                   matchesMask=matchesMask,  # draw only inliers
                   flags=2)
img3 = cv2.drawMatches(img1, kp1, img2, kp2, good, None, **draw_params)
plt.imshow(img3, 'gray')
plt.show()

Effect of the above algorithm (figure in the original post).

The above code and meter image data have been uploaded as a downloadable resource:

https://download.csdn.net/download/sunnyrainflower/88221223


Special note

SURF_create is covered by a patent: running it directly with stock opencv-python reports an error, while SIFT_create runs directly.

To use SURF_create, there are two options:

1. Uninstall the existing opencv-python installation:

      pip uninstall opencv-python

   and install an opencv-contrib-python build that still includes the non-free modules:

      pip install opencv-contrib-python==3.4.2

2. Alternatively, compile OpenCV with the non-free modules enabled without downgrading the version; for the detailed process, see
https://blog.csdn.net/m0_50736744/article/details/129351648


/*-----------------------------------------------------------------------------
// Author: Uncle Bearded
// Copyright statement: please do not reprint without permission. A few images come from the Internet; in case of any infringement, please contact the author for removal.
-----------------------------------------------------------------------------*/


Origin blog.csdn.net/sunnyrainflower/article/details/132298966