Panoramic Image Distortion Correction

1. Introduction

An ideal camera follows the pinhole imaging model. In this model, if the focal length is fixed, the area of the image sensor's pixel plane directly determines the size of the camera's field of view, and objects outside this field of view are not captured by the lens. A camera based on conventional lens imaging therefore cannot have an arbitrarily large field of view; its horizontal field of view is generally less than 140°.

But some fields demand more. In meteorology, researchers need to observe changes in the sky and astronomical phenomena, and want a camera that can capture the entire hemispherical sky in a single shot. In security monitoring, operators want a camera that can cover the whole monitored area at once from a bird's-eye view. To achieve these goals, the camera needs a horizontal field of view of 180° or even larger.

The inspiration came from bionics. Scientists discovered that when a fish looks upward, it can see the entire hemispherical space above the water surface. On closer investigation, the reason turned out to be that the refractive index of water is higher than that of air: light entering water from air is refracted, and the refraction angle is smaller than the incidence angle. Moreover, as the incidence angle increases, the amount by which the refraction angle falls short of it grows as well. Thanks to this property, objects in the entire 180° hemisphere above the water surface can be compressed, with distortion, onto a finite imaging plane.
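This compression can be quantified with Snell's law, n₁·sin θᵢ = n₂·sin θᵣ. A minimal sketch, assuming a refractive index of roughly 1.33 for water (the exact value depends on wavelength and temperature):

```python
import math

N_AIR = 1.0     # approximate refractive index of air
N_WATER = 1.33  # approximate refractive index of water

def refraction_angle(theta_i_deg):
    """Angle of refraction in water for a ray arriving from air (Snell's law)."""
    sin_r = N_AIR * math.sin(math.radians(theta_i_deg)) / N_WATER
    return math.degrees(math.asin(sin_r))

# Even a ray grazing the surface (incidence near 90 deg) is bent to the
# critical angle of roughly 48.8 deg, so the whole 180 deg hemisphere
# above the water maps into a cone of about 97.6 deg under water.
for theta_i in (10, 45, 80, 89.9):
    print(f"{theta_i:6.1f} deg in air -> {refraction_angle(theta_i):5.1f} deg in water")
```

Note how the bending is strongest near grazing incidence, which is exactly why the edge of the hemisphere gets compressed the most — and why the resulting image is distorted.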

This design greatly enlarges the field of view, but it also introduces the problem of image distortion.

2. Image distortion

The imaging process of a camera is essentially a chain of coordinate-system conversions. First, points in space are converted from the world coordinate system to the camera coordinate system; they are then projected onto the imaging plane (the image physical coordinate system); finally, the data on the imaging plane is converted into the image pixel coordinate system. However, deviations in lens manufacturing precision and in the assembly process introduce distortion into the original image. Lens distortion is divided into radial distortion and tangential distortion. See:
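The chain of transforms above can be sketched as follows. All numbers here are made-up example values, not real calibration results: a world point is first moved into the camera frame by the extrinsics (R, t), then projected through the intrinsic matrix K, followed by the perspective divide.

```python
import numpy as np

# Hypothetical intrinsics: focal lengths fx, fy and principal point (cx, cy)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical extrinsics: identity rotation, camera 2 m in front of the origin
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])

def project(point_world):
    """World coords -> camera coords -> pixel coords (ideal pinhole model)."""
    p_cam = R @ point_world + t          # world frame -> camera frame
    p_img = K @ p_cam                    # camera frame -> homogeneous pixels
    return p_img[:2] / p_img[2]          # perspective divide

u, v = project(np.array([0.1, -0.05, 0.0]))
print(u, v)  # pixel coordinates of the projected point
```

Distortion enters this pipeline between the perspective divide and the multiplication by K: the ideal normalized coordinates are warped by the lens before they land on the sensor.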

http://blog.csdn.net/dcrmg/article/details/52950141

http://blog.csdn.net/waeceo/article/details/50580808

Since tangential distortion stems from deviations in the assembly process, most correction work in practice focuses on the radial distortion of the image.
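Radial distortion is commonly modeled as a polynomial in the squared distance r² from the optical center: x_d = x(1 + k₁r² + k₂r⁴ + k₃r⁶), and likewise for y. A sketch with made-up coefficients (real k values come out of calibration, as shown later):

```python
def radial_distort(x, y, k1, k2, k3=0.0):
    """Apply the polynomial radial distortion model to normalized coordinates."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return x * factor, y * factor

# Negative k1 -> barrel distortion (points pulled toward the center),
# the typical case for wide-angle and fisheye lenses.
xd, yd = radial_distort(0.5, 0.5, k1=-0.2, k2=0.0)
print(xd, yd)
```

The center of the image (r = 0) is unaffected, and the displacement grows rapidly toward the edges — matching the familiar look of fisheye images, where straight lines near the border bow outward.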

3. Distortion Correction

To correct the distortion of an image, you need several parameters: the camera's intrinsic parameters, including the focal length, the imaging center, and the distortion coefficients; and the camera's extrinsic parameters, namely the rotation matrix and the translation vector. If you already know the camera's intrinsic parameters, correction is easy. In most cases you do not, so you obtain the intrinsic and extrinsic parameters through camera calibration.

The most widely used approach today is Zhang Zhengyou's calibration method (Zhang's method); a quick search will turn up many detailed explanations. Understanding the underlying derivation is worthwhile, but for practical purposes the most important thing is knowing how to apply it.

The most common practice is to calibrate the camera parameters with a checkerboard. The steps are as follows:

1. Prepare a checkerboard; the higher its manufacturing precision the better, and ideally the entire board lies in a single flat plane without bumps.

2. Photograph the checkerboard from multiple viewpoints. The full checkerboard must appear in every image, and it should appear in a variety of positions — for example, in the upper-left corner in one shot and the upper-right corner in another. Capturing 10–15 images is recommended.

3. Use the functions that come with OpenCV to detect the checkerboard corners, obtaining the corners' coordinates in the world coordinate system (objpoint) and their pixel coordinates in the pixel coordinate system (imgpoint).

4. Use objpoint and imgpoint to calibrate the camera, obtaining the intrinsic matrix, the distortion coefficients, and the rotation and translation vectors.

5. Use the obtained camera parameters to correct the distortion of the image.

4. Distortion correction code

# coding:utf-8
import cv2
import numpy as np
import glob

# Termination criteria for the sub-pixel corner refinement
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# Checkerboard pattern size (inner corners per row and per column)
w = 9
h = 6
# Checkerboard corner positions in the world coordinate system,
# e.g. (0,0,0), (1,0,0), (2,0,0) ..., (8,5,0); the board is planar, so Z = 0
objp = np.zeros((w * h, 3), np.float32)
objp[:, :2] = np.mgrid[0:w, 0:h].T.reshape(-1, 2)
# Paired world coordinates and image coordinates of the corners
objpoints = []  # 3D points in the world coordinate system
imgpoints = []  # 2D points in the image plane

images = glob.glob('D:\\images\\*.jpg')
for fname in images:
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Find the checkerboard corners
    ret, corners = cv2.findChessboardCorners(gray, (w, h), None)
    # If enough corners were found, refine them to sub-pixel accuracy and store them
    if ret:
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        objpoints.append(objp)
        imgpoints.append(corners)
        # Draw the detected corners on the image for inspection
        cv2.drawChessboardCorners(img, (w, h), corners, ret)
        cv2.imshow('findCorners', img)
        # cv2.imwrite('D:\\images\\grid_out.png', img)
        cv2.waitKey(1)
cv2.destroyAllWindows()

# Calibration
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
# print(mtx)
# print(dist)
# print(rvecs)
# print(tvecs)
# Undistortion
img2 = cv2.imread('D:\\images\\10.jpg')
h, w = img2.shape[:2]
# alpha=1: free scaling parameter, keep all source pixels
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))
dst = cv2.undistort(img2, mtx, dist, None, newcameramtx)
# Crop the result to the valid ROI computed above
# x, y, w, h = roi
# dst = dst[y:y+h, x:x+w]
cv2.imwrite('D:\\images\\grid_out.png', dst)

# Reprojection error
total_error = 0
for i in range(len(objpoints)):
    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    error = cv2.norm(imgpoints[i], imgpoints2, cv2.NORM_L2) / len(imgpoints2)
    total_error += error
print("total error: ", total_error / len(objpoints))

# Undistort a video frame by frame
cap = cv2.VideoCapture('D:\\video\\video.mp4')
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
fps = int(cap.get(cv2.CAP_PROP_FPS))
frame_size = (width, height)
video_writer = cv2.VideoWriter('D:\\video\\result2.mp4', cv2.VideoWriter_fourcc(*"mp4v"), fps, frame_size)
for frame_idx in range(int(cap.get(cv2.CAP_PROP_FRAME_COUNT))):
    ret, frame = cap.read()
    if ret:
        image_ = cv2.undistort(frame, mtx, dist, None, newcameramtx)
        cv2.imshow('undistorted', image_)
        video_writer.write(image_)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break
cap.release()
video_writer.release()
cv2.destroyAllWindows()

5. Additional notes

If no calibration board is available, or using one is inconvenient, the camera can still be calibrated without automatic corner detection by marking point correspondences manually. Place some markers in the scene, ideally arranged as a rectangular grid such as 4×4; measure their world coordinates to obtain objpoint, and mark their pixel positions by hand to obtain imgpoint. Although this introduces some error, it can still give good results.
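A sketch of how such manually marked correspondences could be assembled. The marker spacing and the pixel coordinates below are hypothetical placeholders; in practice they would come from your own scene measurements and your own manual annotation:

```python
import numpy as np

# Hypothetical 4x4 grid of markers, 50 mm apart, all on one plane (Z = 0)
GRID = 4
SPACING = 50.0  # mm between neighboring markers
objpoint = np.zeros((GRID * GRID, 3), np.float32)
objpoint[:, :2] = np.mgrid[0:GRID, 0:GRID].T.reshape(-1, 2) * SPACING

# Pixel positions of the same 16 markers, marked by hand in one image.
# These numbers are placeholders; in practice you would click each marker.
imgpoint = np.array([[100 + 60 * (i % GRID), 80 + 60 * (i // GRID)]
                     for i in range(GRID * GRID)], np.float32).reshape(-1, 1, 2)

# Collect one (objpoint, imgpoint) pair per captured view, then pass the
# lists to cv2.calibrateCamera exactly as in the checkerboard pipeline above.
objpoints = [objpoint]
imgpoints = [imgpoint]
print(objpoint.shape, imgpoint.shape)  # (16, 3) (16, 1, 2)
```

The key point is that the data has the same shape and meaning as the checkerboard output, so the rest of the calibration pipeline is unchanged; only the corner detection step is replaced by manual annotation.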


Origin blog.csdn.net/Orange_sparkle/article/details/130102267