Camera Calibration (Computer Vision Assignment)

The instructor only asked us to calibrate the camera intrinsics, so intrinsic calibration is all I did. If you are a classmate, please don't copy this verbatim!

Assignment: camera calibration

Calibration setup:

Device: iPhone SE 2020 rear camera

Calibration board: Zhang's chessboard pattern, 15.65 cm long and 10.92 cm wide, as shown below:

(Figure: the chessboard calibration board)

Calibration data:

A calibration dataset of 20 photos was prepared, as shown below:

(Figure: the 20 calibration photos)

Calibration procedure:

Described in plain language:

  1. First define the object points, i.e. the coordinates in the world coordinate system. For the chessboard used here they run from (0,0,0) to (8,5,0); since all the points lie in one plane, the Z coordinate is simply 0 (see the sketch after this list for scaling these points to physical units).
  2. Use cv2.findChessboardCorners() to find the corresponding points in the image coordinate system; these are the image points.
  3. cv2.cornerSubPix() can then refine these corners to sub-pixel accuracy, improving precision.
  4. Finally, cv2.calibrateCamera() computes the camera intrinsics and the distortion coefficients.
  5. (Undistortion is also done along the way.)
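A hedged side note on step 1: the program below builds the object points on a unit grid, which is all the intrinsics need, but it means the translation vectors come out in units of "squares" rather than centimeters. If metric extrinsics were wanted, the grid could be scaled by the physical square size. Assuming the 15.65 cm × 10.92 cm board carries 10 × 7 squares behind its 9 × 6 inner corners, one square is roughly 1.56 cm, and the scaling would look like this (a sketch, not part of the assignment code):

import numpy as np

# Hedged sketch: object points in centimeters rather than unit squares.
# Assumption: ~1.56 cm per square, estimated from the board dimensions above.
square_size_cm = 1.56
objp_metric = np.zeros((6 * 9, 3), np.float32)
objp_metric[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * square_size_cm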

Calibration program:

# Camera calibration
import cv2
import numpy as np
import glob

# Termination criteria for the sub-pixel corner search: stop after at most 30 iterations or when the correction falls below 0.001
criteria = (cv2.TERM_CRITERIA_MAX_ITER | cv2.TERM_CRITERIA_EPS, 30, 0.001)

# Object-point grid for the calibration board; 6 * 9 means the board has 6 rows and 9 columns of inner corners
objp = np.zeros((6 * 9, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

# For every chessboard image we detect the same set of inner corners, nx * ny of them.
# The 3D coordinates of all inner corners of each board are stored as one array.
# The world coordinate system is placed on the calibration board, so every Z coordinate is 0.

obj_points = []  # 3D points
img_points = []  # 2D points
images = glob.glob("Photos/*.JPG")  # load all calibration photos
i = 0
for photo in images:
    img = cv2.imread(photo)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # convert the BGR image to grayscale, which the corner detector works on
    size = gray.shape[::-1]  # image size as (width, height)
    # Detect the corners; "corners" here means the inner corners of the board, which do not touch the board edge.
    ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)
    # print(corners)  # corners is the matrix of pixel-coordinate points

    # ret is True when the corners were found
    if ret:
        obj_points.append(objp)  # object points

        corners2 = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)  # refine the detected corners to sub-pixel accuracy
        # print(corners2)
        if corners2.any():
            img_points.append(corners2)
        else:
            img_points.append(corners)

        cv2.drawChessboardCorners(img, (9, 6), corners, ret)  # draw the detected corners; OpenCV drawing functions modify the image in place
        i += 1
        # Optionally save the image with the drawn corner connections to the current directory
        # cv2.imwrite('conimg' + str(i) + '.jpg', img)
        # waitKey() keeps refreshing the display window with a delay of 'delay' ms
        # cv2.waitKey(10)

# print(len(img_points))
cv2.destroyAllWindows()  # close all windows and release the associated memory

# Calibration
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)

# print("ret:", ret)  # overall RMS reprojection error
print("Intrinsic matrix:\n", mtx)  # intrinsic matrix
print("Distortion coefficients:\n", dist)  # distortion coefficients = (k_1, k_2, p_1, p_2, k_3)
print("Rotation vectors:\n", rvecs)  # rotation vectors (extrinsics)
print("Translation vectors:\n", tvecs)  # translation vectors (extrinsics)

# -------------- Undistortion --------------

img = cv2.imread(images[2])
h, w = img.shape[:2]
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))  # alpha=1 keeps the full field of view (undistortion normally crops away part of the image)
print(newcameramtx)
print("------------------ using the undistort function -------------------")
dst = cv2.undistort(img, mtx, dist, None, newcameramtx)
x, y, w, h = roi
dst1 = dst[y:y + h, x:x + w]  # crop to the valid region
# The undistorted image can be written out; curved lines become straight again.
# cv2.imwrite('calibresult3.jpg', dst1)
print("Size of the cropped image:", dst1.shape)

Calibration results:

Intrinsic matrix: [[3.37260014e+03 0.00000000e+00 2.01238322e+03]
[0.00000000e+00 3.37128068e+03 1.48533305e+03]
[0.00000000e+00 0.00000000e+00 1.00000000e+00]]
Distortion coefficients: [[ 2.78386435e-01 -1.93004751e+00 5.44507758e-04 4.20784165e-04 3.94229056e+00]]

Rotation vectors: (array([[ 0.00518409],
[-0.0491393 ],
[ 0.01702998]]), array([[-0.31987748],
[-0.02141452],
[-0.00746327]]), array([[-0.23901262],
[-0.4966867 ],
[-0.18099524]]), array([[-0.0424941 ],
[-0.63074781],
[ 0.08473226]]), array([[ 0.3164465 ],
[-0.4755016 ],
[ 0.13260775]]), array([[ 0.45313589],
[-0.0973904 ],
[-0.00470327]]), array([[ 0.41024254],
[ 0.05424036],
[-0.36478108]]), array([[ 0.22780115],
[ 0.48111286],
[-0.14303249]]), array([[ 0.09184346],
[ 0.41538576],
[-0.26442862]]), array([[-0.1444159 ],
[ 0.42613371],
[-0.10781683]]), array([[ 0.04158784],
[-0.06825461],
[ 1.57631742]]), array([[-0.3021039 ],
[ 0.2038158 ],
[ 1.58358896]]), array([[-0.57401741],
[-0.32766678],
[ 1.4616254 ]]), array([[-0.00452669],
[-0.9344181 ],
[ 2.62157043]]), array([[ 0.57864741],
[-0.01836581],
[ 1.51917305]]), array([[ 0.68433859],
[-0.04420174],
[ 0.97395362]]), array([[0.48602174],
[0.2604793 ],
[0.86238979]]), array([[0.31781466],
[0.30833434],
[1.18288238]]), array([[-0.42121546],
[ 0.35998703],
[ 1.5655354 ]]), array([[ 0.55521057],
[-0.62325092],
[ 1.48740812]]))

Translation vectors:
(array([[-3.31393802],
[-2.42553109],
[12.53142253]]), array([[-3.97440395],
[-3.48842905],
[13.10730669]]), array([[-2.7576295 ],
[-3.32731355],
[11.90250582]]), array([[-0.73788307],
[-2.78164317],
[11.65156662]]), array([[-0.27408865],
[-2.3540926 ],
[10.958821 ]]), array([[-3.55496742],
[-1.38506173],
[12.30050402]]), array([[-5.80367284],
[ 0.91502717],
[13.68876937]]), array([[-5.39478269],
[-1.20010392],
[15.74511719]]), array([[-7.19416372],
[-0.88533999],
[16.3039844 ]]), array([[-4.78438777],
[-2.29856947],
[15.25392412]]), array([[ 3.21820632],
[-3.76776554],
[12.75827329]]), array([[ 3.08491139],
[-4.83814376],
[15.11836867]]), array([[ 3.03228442],
[-4.31016566],
[16.68449193]]), array([[ 5.17037729],
[ 1.1803914 ],
[13.40430311]]), array([[ 2.27031947],
[-2.75114923],
[12.23355288]]), array([[-0.46623625],
[-2.65448346],
[11.23344642]]), array([[-2.00219289],
[-3.2508639 ],
[14.01714495]]), array([[-0.65135842],
[-4.32589509],
[14.24001887]]), array([[ 3.46761168],
[-3.66166824],
[16.44599573]]), array([[ 3.63193431],
[-2.48862969],
[10.18628943]]))
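One last convenience (my own hedged addition, not part of the assignment): the intrinsic matrix and distortion coefficients could be saved once and reloaded later, so new photos from the same camera can be undistorted without recalibrating. The file name below is arbitrary:

# Hedged sketch: persist and reload mtx and dist from the program above.
import numpy as np

np.savez("iphone_se2020_intrinsics.npz", mtx=mtx, dist=dist)
data = np.load("iphone_se2020_intrinsics.npz")
mtx_loaded, dist_loaded = data["mtx"], data["dist"]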


Reposted from blog.csdn.net/qq_20184333/article/details/127166739