Calibration summary of SLAM sensors: Camera/IMU/LiDAR


This post covers intrinsic calibration of the camera and IMU and joint (extrinsic) calibration between the camera, IMU, and LiDAR. The toolkits were installed on Ubuntu 20.04, and the tools used include: the MATLAB toolbox, the ROS calibration toolkit, kalibr, OpenCV, imu_utils, calibration_camera_lidar, and lidar_imu_calib.

1. Calibration of camera (monocular) internal parameters

1.1 Solution 1: MATLAB Toolbox

This is the most convenient option (see the referenced article). Open MATLAB, go to the APPS tab, expand the drop-down to see more options, and find Camera Calibrator (a stereo calibrator is also provided; here the monocular case is demonstrated):


Add the previously captured images of the calibration board (the image sequence saved in Solution 2 can be reused). When shooting, the board should ideally fill more than half of the frame, and the set should include translations (forward/back, left/right, up/down) as well as rotations.


Enter the actual distance between adjacent checkerboard corners; the calibration accuracy depends heavily on how accurately the board was made. Here it is 35 mm. After clicking OK, the program automatically extracts and matches the corners; the results are as follows:


Select the distortion parameters to estimate (both radial and tangential distortion here), then click Calibrate to run the computation.


On the right, the per-frame mean reprojection error and the overall mean over all frames are shown, along with a visualization of each image's pose. Two frames clearly have an excessive reprojection error; clicking them shows images with motion blur. Right-click the corresponding images in the list on the left to delete them, then recalibrate. The final mean reprojection error is 0.16 pixels, which is already quite accurate.


Export the camera parameters to view the calibration variables:


1.2 Solution 2: Use the ROS calibration toolkit

Refer to section 7.1 of my article ( link ).

1.3 Solution 3: Use the calibration tool kalibr

Kalibr provides multi-IMU calibration, camera-IMU calibration, and multi-camera calibration. Here the multi-camera calibration command is used to calibrate a single camera.

1.3.1 Install kalibr

# For Ubuntu 18.04
# Install dependencies
sudo apt update
sudo apt-get install python-setuptools python-rosinstall ipython libeigen3-dev libboost-all-dev doxygen libopencv-dev ros-melodic-vision-opencv ros-melodic-image-transport-plugins ros-melodic-cmake-modules python-software-properties software-properties-common libpoco-dev python-matplotlib python-scipy python-git python-pip ipython libtbb-dev libblas-dev liblapack-dev python-catkin-tools libv4l-dev
sudo pip install python-igraph --upgrade
# Create a workspace and download the source
mkdir -p kalibr_ws/src
cd kalibr_ws/src
git clone https://github.com/ethz-asl/kalibr.git
# Build
cd .. 
source /opt/ros/melodic/setup.bash
catkin init
catkin config --extend /opt/ros/melodic
catkin config --merge-devel # Necessary for catkin_tools >= 0.4.
catkin config --cmake-args -DCMAKE_BUILD_TYPE=Release
catkin build -DCMAKE_BUILD_TYPE=Release -j4
# For Ubuntu 20.04
# Install dependencies
sudo apt update
sudo apt-get install python3-setuptools python3-rosinstall ipython3 libeigen3-dev libboost-all-dev doxygen libopencv-dev ros-noetic-vision-opencv ros-noetic-image-transport-plugins ros-noetic-cmake-modules python3-software-properties software-properties-common libpoco-dev python3-matplotlib python3-scipy python3-git python3-pip libtbb-dev libblas-dev liblapack-dev libv4l-dev python3-catkin-tools python3-igraph libsuitesparse-dev libgtk-3-dev
# The following two commands may hang or fail; wait and retry a few times
pip3 install attrdict
pip3 install wxPython
# Create a workspace and download the source
mkdir -p kalibr_ws/src
cd kalibr_ws/src
git clone https://github.com/ethz-asl/kalibr.git
# Build
cd .. 
source /opt/ros/noetic/setup.bash
catkin init
catkin config --extend /opt/ros/noetic
catkin config --merge-devel # Necessary for catkin_tools >= 0.4.
catkin config --cmake-args -DCMAKE_BUILD_TYPE=Release
catkin build -DCMAKE_BUILD_TYPE=Release -j8

1.3.2 Prepare the calibration board

Prepare an Aprilgrid board, the calibration target recommended by kalibr. You can download the board provided by the author from the project page, but if that site is hard to reach from your network, you can generate one with the author's package:

rosrun kalibr kalibr_create_target_pdf --type apriltag --nx [NUM_COLS] --ny [NUM_ROWS] --tsize [TAG_WIDTH_M] --tspace [TAG_SPACING_PERCENT]
eg: rosrun kalibr kalibr_create_target_pdf --type apriltag --nx 8 --ny 8 --tsize 0.1 --tspace 0.3
# The author's example:
eg: rosrun kalibr kalibr_create_target_pdf --type apriltag --nx 6 --ny 6 --tsize 0.088 --tspace 0.3

If you get the error No module named 'pyx', the fix from the GitHub issues is:

sudo apt install python-pyx #18.04
sudo apt install python3-pyx #20.04

A pdf file will be generated in the current directory:


1.3.3 Calibration method

(1) First record data: drive your own camera and, while recording, translate it up/down, left/right, and forward/back and rotate about each axis three times, finishing with some random motion to improve robustness.

rosbag record /image_raw -O images.bag

Kalibr recommends an image rate of about 4 Hz. The following command throttles old_topic in the original rosbag down to a 4 Hz new_topic; you can then record new_topic while playing back the bag:

rosrun topic_tools throttle messages old_topic 4.0 new_topic
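If you prefer to downsample offline instead, a minimal Python sketch using the rosbag API can write a roughly 4 Hz copy of the bag (file and topic names here are illustrative, adjust them to your setup):

#!/usr/bin/env python
# Write a copy of a bag keeping the image topic at roughly 4 Hz (illustrative names).
import rosbag

MIN_GAP = 1.0 / 4.0  # minimum spacing between kept image messages, in seconds
last_kept = None

with rosbag.Bag('images_4hz.bag', 'w') as outbag:
    for topic, msg, t in rosbag.Bag('images.bag').read_messages():
        if topic != '/image_raw':
            outbag.write(topic, msg, t)  # pass other topics through unchanged
            continue
        if last_kept is None or (t - last_kept).to_sec() >= MIN_GAP:
            outbag.write(topic, msg, t)
            last_kept = t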

(2) Then run the calibration:

source devel/setup.bash
rosrun kalibr kalibr_calibrate_cameras --target april_6x6_24x24mm.yaml --bag images.bag --bag-from-to 5 20 --models pinhole-radtan --topics /image_raw --show-extraction

Several parameters deserve attention:

  • --bag images.bag: path to the recorded bag;
  • --bag-from-to 5 20: read only from second 5 to second 20 of the bag, i.e. trim the beginning and end, where violent jitter (e.g. picking the camera up or putting it down) can introduce errors; if the whole recording is smooth, you can omit this;
  • --target april_6x6_24x24mm.yaml: the calibration board parameter file; the author specifically explains the meaning of tagSize and tagSpacing, so modify them to match the board you printed:
target_type: 'aprilgrid'   #gridtype
tagCols: 6              #number of apriltags
tagRows: 6             #number of apriltags
tagSize: 0.088          #size of apriltag, edge to edge [m]
tagSpacing: 0.3         #ratio of space between tags to tagSize
codeOffset: 0           #code offset for the first tag in the aprilboard


  • --models pinhole-radtan: the camera model; pinhole-radtan is the most commonly used one, a pinhole model with radial and tangential distortion (a short sketch of this model follows this list; for more detail see the author's camera model documentation);
  • --topics /image_raw: image topic;
  • --show-extraction: You can display the image and see the extracted feature points.
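For intuition, the radial-tangential (plumb-bob) distortion that pinhole-radtan applies to normalized image coordinates can be written in a few lines. This is a generic sketch of the standard model, not Kalibr's own code, with k1, k2 the radial and p1, p2 the tangential coefficients:

def distort_radtan(x, y, k1, k2, p1, p2):
    # Apply radial-tangential distortion to normalized coordinates (x, y).
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d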

After calibration finishes, the result plots pop up automatically (the data I used here was not great and the reprojection error is a bit large, so a better recording would be needed):


The following result files are generated at the same time:

  • Camera intrinsics (a short Python sketch for reusing them follows this list):
cam0:
  cam_overlaps: []
  camera_model: pinhole
  distortion_coeffs: [-0.294719325148729, 0.08601391290708914, -0.0003238494917967103, -0.00041999959826033857]
  distortion_model: radtan
  intrinsics: [461.14152593310064, 459.7917242849438, 363.2138823275693, 235.61324563304655]
  resolution: [752, 480]
  rostopic: /cam0/image_raw 
  • txt file of calibration results;
  • A pdf of the plot of the calibration results.
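To reuse the camchain file in your own code, the intrinsics can be unpacked into an OpenCV-style camera matrix. A minimal sketch, assuming the file is called camchain-images.yaml (Kalibr derives the actual name from the bag name):

import yaml
import numpy as np

with open('camchain-images.yaml') as f:  # file name is illustrative
    cam = yaml.safe_load(f)['cam0']

fu, fv, cu, cv_ = cam['intrinsics']
K = np.array([[fu, 0.0, cu],
              [0.0, fv, cv_],
              [0.0, 0.0, 1.0]])
dist = np.array(cam['distortion_coeffs'])  # radtan: k1, k2, p1, p2
print(K)
print(dist)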

1.4 Solution 4: Write a program to call OpenCV for calibration

Refer to the book "OpenCV4 Quick Start"

#include <opencv2/opencv.hpp>
#include <fstream>
#include <iostream>
#include <vector>
#include <cstring>
#include <sstream>
#include <cstdlib>
using namespace std;
using namespace cv;

void initUndistAndRemap(vector<Mat> imgs, Mat cameraMatrix, Mat distCoeffs, Size imageSize, vector<Mat> &undistImgs)
{
    // Compute the undistortion mapping matrices
    Mat R = Mat::eye(3, 3, CV_32F);
    Mat mapx = Mat(imageSize, CV_32FC1);
    Mat mapy = Mat(imageSize, CV_32FC1);
    // Camera matrix / distortion coefficients / ...
    initUndistortRectifyMap(cameraMatrix, distCoeffs, R, cameraMatrix, imageSize, CV_32FC1, mapx, mapy);
    for (int i = 0; i < imgs.size(); i++)
    {
        Mat undistImg;
        remap(imgs[i], undistImg, mapx, mapy, INTER_LINEAR);
        undistImgs.push_back(undistImg);
    }
}

void undist(vector<Mat> imgs, Mat cameraMatrix, Mat distCoeffs, vector<Mat> &undistImgs)
{
    for (int i = 0; i < imgs.size(); i++)
    {
        Mat undistImg;
        // Undistort a single image: distorted image / undistorted image / camera matrix / distortion coefficients
        undistort(imgs[i], undistImg, cameraMatrix, distCoeffs);
        undistImgs.push_back(undistImg);
    }
}

bool LoadData(const string &imagePath, const string &imageFilename, vector<Mat> &imgs)
{
    ifstream Left;
    Left.open(imageFilename.c_str());
    while (!Left.eof())
    {
        string s;
        getline(Left, s);
        if (!s.empty())
        {
            stringstream ss;
            ss << s;
            double t;
            string imageName;
            ss >> t;
            ss >> imageName;
            Mat img = imread(imagePath + "/" + imageName);
            imgs.push_back(img);
            if (!img.data)
            {
                cout << "Please provide valid image files" << endl;
                return 0;
            }
        }
    }

    return 1;
}

int main(int argc, char **argv)
{
    if (argc != 6)
    {
        cerr << endl
             << "Usage: ./CameraCalibration path_to_CalibrateImage path_to_calibdata.txt board_size_cols board_size_rows corners_of_checkerboard(mm)" << endl
             << "eg: ./CameraCalibration ../CalibrateData ../CalibrateData/calibdata.txt 9 6 10" << endl;

        return 1;
    }

    // -1 Load the data
    vector<Mat> imgs;
    LoadData(argv[1], argv[2], imgs);
    // Number of inner corner columns/rows of the checkerboard
    int bcols, brows;
    // Real size of each checkerboard square
    double side_length;
    stringstream ss;
    ss << argv[3];
    ss >> bcols;
    ss.clear();
    ss << argv[4];
    ss >> brows;
    ss.clear();
    ss << argv[5];
    ss >> side_length;
    Size board_size = Size(bcols, brows);

    // -2 Extract the pixel coordinates of the calibration board corners
    // The corners of each image go into one entry of vector<vector<Point2f>>
    vector<vector<Point2f>> imgsPoints;
    for (int i = 0; i < imgs.size(); i++)
    {
        Mat img1 = imgs[i], gray1;
        cvtColor(img1, gray1, COLOR_BGR2GRAY);
        vector<Point2f> img1_points;
        // 1st argument: the input checkerboard image
        // 2nd argument: the number of inner corners per row/column (note: inner corners only, excluding the border, not the number of squares)
        // 3rd argument: the detected corners, of type std::vector<cv::Point2f>
        bool ret = findChessboardCorners(gray1, board_size, img1_points);

        // Refine the corner coordinates to sub-pixel accuracy; Size(5, 5) is the neighborhood used for refinement
        find4QuadCornerSubpix(gray1, img1_points, Size(5, 5));

        // 1st argument: the checkerboard image (8UC3), both input and output
        // 2nd argument: the number of inner corners per row/column
        // 3rd argument: the detected corners
        // 4th argument: the return value of cv::findChessboardCorners()
        drawChessboardCorners(img1, board_size, img1_points, ret);
        // string windowNumber = to_string(i);
        // imshow("chessboard corners"+windowNumber, img1);

        imgsPoints.push_back(img1_points);
    }

    // -3 World coordinates of each inner corner of the checkerboard
    vector<vector<Point3f>> objectPoints; // 3D coordinates (all in one plane, taken as the xy plane)
    for (int i = 0; i < imgsPoints.size(); i++)
    {
        vector<Point3f> tempPointSet;
        for (int j = 0; j < board_size.height; j++)
        {
            for (int k = 0; k < board_size.width; k++)
            {
                // Assume the board lies in the z = 0 plane of the world frame
                Point3f realPoint;
                realPoint.x = j * side_length;
                realPoint.y = k * side_length;
                realPoint.z = 0;
                tempPointSet.push_back(realPoint);
            }
        }
        objectPoints.push_back(tempPointSet);
    }

    // -4 Calibrate the intrinsics and distortion
    Size imageSize; // Image size
    imageSize.width = imgs[0].cols;
    imageSize.height = imgs[0].rows;
    // Intrinsic and extrinsic parameters
    Mat cameraMatrix = Mat(3, 3, CV_32FC1, Scalar::all(0)); // Camera intrinsic matrix
    Mat distCoeffs = Mat(1, 5, CV_32FC1, Scalar::all(0));   // The five distortion coefficients
    vector<Mat> rvecs, tvecs;                               // Rotation/translation vector of each image
    // Call the OpenCV calibration function
    calibrateCamera(objectPoints, imgsPoints, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, 0);
    cout << "Camera intrinsic matrix K=" << endl
         << cameraMatrix << endl;
    cout << "Camera distortion coefficients:" << endl
         << distCoeffs << endl;

    // -5 Save the results
    ofstream calibrateFile;
    // Write the data:
    calibrateFile.open("../calibrateCamera.txt"); // Not created automatically
    if (!calibrateFile.is_open())                 // Check whether the file is open
    {
        exit(EXIT_FAILURE); // Abort
    }
    calibrateFile << fixed;     // Start writing, same usage as cout
    calibrateFile.precision(5); // Decimal precision
    calibrateFile << "cameraMatrix K=" << endl
                  << cameraMatrix; // Write (overwrite) the data
    calibrateFile << endl << "distCoeffs=" << endl
                  << distCoeffs;
    calibrateFile.close();

    // -6 Undistort the images
    vector<Mat> undistImgs;
    // Rectify the images with initUndistortRectifyMap() and remap()
    initUndistAndRemap(imgs, cameraMatrix, distCoeffs, imageSize, undistImgs); // distorted images / intrinsics and distortion computed above / image size / undistorted images; the helper just handles multiple images
    // undist(imgs, cameraMatrix, distCoeffs, undistImgs); // or compute the rectified images directly with undistort(); again the helper handles multiple images
    // Show the images before and after undistortion (one example)
    for (int i = 0; i < 1; i++)
    {
        string windowNumber = to_string(i);
        imshow("chessboard corners without undistort -- image" + windowNumber, imgs[i]);
        imshow("chessboard corners with undistort -- image" + windowNumber, undistImgs[i]);
        imwrite("../chessboard corners without undistort.png", imgs[i]);
        imwrite("../chessboard corners with undistort.png", undistImgs[i]);
    }
    waitKey(0);

    // -7 Monocular (re)projection: compute 2D image coordinates from the imaging model and the 3D points
    vector<vector<Point2f>> imagePoints; // 2D coordinates
    // Estimate the pixel coordinates of the inner corners from the 3D points and the camera/world transform
    for (int i = 0; i < imgs.size(); i++)
    {
        Mat rvec = rvecs[i], tvec = tvecs[i];
        vector<Point3f> PointSets = objectPoints[i];
        vector<Point2f> imagePoint;                                                 // 2D coordinates
        projectPoints(PointSets, rvec, tvec, cameraMatrix, distCoeffs, imagePoint); // input 3D points / rotation and translation vectors / intrinsics and distortion computed above / output 2D points
        imagePoints.push_back(imagePoint);
    }

    // -8 Compute the reprojection error
    vector<vector<double>> ReProjectionError;
    vector<vector<double>> ReProjectionErrorX;
    vector<vector<double>> ReProjectionErrorY;
    double e = 0.0;
    for (int i = 0; i < imgs.size(); i++)
    {
        vector<Point2f> imagePoint = imagePoints[i]; // 2D coordinates
        vector<double> er;
        vector<double> erX;
        vector<double> erY;
        for (int j = 0; j < imagePoint.size(); j++)
        {
            double eX = imagePoint[j].x - imgsPoints[i][j].x;
            double eY = imagePoint[j].y - imgsPoints[i][j].y;
            double error = sqrt(pow(eX, 2) + pow(eY, 2));
            erX.push_back(eX);
            erY.push_back(eY);
            er.push_back(error);
            e += error;
        }
        ReProjectionError.push_back(er);
        ReProjectionErrorX.push_back(erX);
        ReProjectionErrorY.push_back(erY);
    }
    // Mean error between the projected estimates and the corners measured in the images
    cout << "Mean reprojection error: " << e / (imagePoints[0].size() * imgs.size()) << endl;

    // -9 Save the reprojection errors
    ofstream ReProjectionErrorFile;
    ReProjectionErrorFile.open("../ReProjectionError.txt");
    ofstream ReProjectionErrorFileX;
    ReProjectionErrorFileX.open("../ReProjectionErrorX.txt");
    ofstream ReProjectionErrorFileY;
    ReProjectionErrorFileY.open("../ReProjectionErrorY.txt");
    if (!ReProjectionErrorFile.is_open() || !ReProjectionErrorFileX.is_open() || !ReProjectionErrorFileY.is_open())
    {
        exit(EXIT_FAILURE);
    }
    ReProjectionErrorFile << fixed;
    ReProjectionErrorFile.precision(5);
    ReProjectionErrorFileX << fixed;
    ReProjectionErrorFileX.precision(5);
    ReProjectionErrorFileY << fixed;
    ReProjectionErrorFileY.precision(5);
    for (int i = 0; i < imgs.size(); i++)
    {
        for (int j = 0; j < imagePoints[i].size(); j++)
        {
            ReProjectionErrorFile << ReProjectionError[i][j] << " ";
            ReProjectionErrorFileX << ReProjectionErrorX[i][j] << " ";
            ReProjectionErrorFileY << ReProjectionErrorY[i][j] << " ";
        }
        ReProjectionErrorFile << endl;
        ReProjectionErrorFileX << endl;
        ReProjectionErrorFileY << endl;
    }
    ReProjectionErrorFile.close();
    ReProjectionErrorFileX.close();
    ReProjectionErrorFileY.close();

    /*
    // Corner extraction for a circle-grid calibration board
    Mat img2 = imread("../CalibrateData/circle.png");
    Mat gray2;
    cvtColor(img2, gray2, COLOR_BGR2GRAY);
    Size board_size2 = Size(7, 7);          // number of circle centers (rows, columns)
    vector<Point2f> img2_points;            // detected centers of one image in vector<Point2f>
    findCirclesGrid(gray2, board_size2, img2_points);           // detect the circle-grid centers
    find4QuadCornerSubpix(gray2, img2_points, Size(5, 5));      // refine the center coordinates
    drawChessboardCorners(img2, board_size2, img2_points, true);
    imshow("Circle-grid corner detection result", img2);
    waitKey();
    */

    return 0;
}

Take the data in the book as an example:


Comparing the original image with the undistorted one, you can see the difference: straight lines in the image are no longer bent.

Plot the reprojection error using Python:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Author: ZARD帧心
import numpy as np
import matplotlib.pyplot as plt

errorX = np.genfromtxt('ReProjectionErrorX.txt')  # load the data files into arrays
errorY = np.genfromtxt('ReProjectionErrorY.txt')
(m, n) = np.shape(errorX)
error = (errorX**2 + errorY**2)**0.5

# Plot the mean reprojection error of each image
error_image = np.average(error, axis=1)  # mean over each row
image = range(m)
plt.bar(image, error_image)
plt.xlabel('Image number', {'size': 30})
plt.ylabel('ReProjection Error (pixel)', {'size': 30})
plt.xticks(fontsize=30)  # x-axis tick label size
plt.yticks(fontsize=30)  # y-axis tick label size
plt.show()

# Plot the reprojection error of every corner
x = errorX.reshape(m*n, 1)
y = errorY.reshape(m*n, 1)
r = error.reshape(m*n, 1)  # values used to map colors
# np.set_printoptions(threshold=np.inf)
# print(r)
# print(max(r))
h = plt.scatter(x, y, c=r, marker='*', alpha=0.8, cmap='viridis')
plt.xlabel('X error (pixel)', {'size': 30})
plt.ylabel('Y error (pixel)', {'size': 30})
plt.xticks(fontsize=30)  # x-axis tick label size
plt.yticks(fontsize=30)  # y-axis tick label size
cb = plt.colorbar(h)  # show the color bar
font = {'weight': 'normal', 'size': 30}
cb.ax.tick_params(labelsize=30)  # color bar tick label size
cb.set_label('ReProjection Error', fontdict=font)  # color bar label font and size
plt.show()

Below are the plots for the same data as in Solution 1 (MATLAB); the results are considerably worse than MATLAB's. There may be a problem in my code; if anyone spots what needs to be changed, please let me know.

Later I used a simple method to reject gross outliers and obtained the following results, which are better but still well behind MATLAB.


2. Calibration of IMU internal parameters

Use imu_utils to calibrate the IMU (see the referenced article).
(1) Install dependencies

sudo apt-get install libdw-dev

(2) Download imu_utils and code_utils. Note that gaowenliang's imu_utils has unit problems; use the mintar fork of imu_utils instead.
(3) Pay attention to a few issues before compiling:

  • code_utils depends on Eigen, OpenCV 3 and Ceres (the links point to my earlier articles with their installation tutorials);
  • Do not compile imu_utils and code_utils at the same time: imu_utils depends on code_utils, so build code_utils first and then imu_utils.

(4) Put it in your workspace and compile:

catkin_make
  • Compilation error: code_utils-master/src/sumpixel_test.cpp:2:24: fatal error: backward.hpp: No such file or directory. Find sumpixel_test.cpp under code_utils and change
// #include "backward.hpp"
// to
#include "code_utils/backward.hpp"
  • If you run into many C++ standard library errors, first try raising the C++ standard:
# set(CMAKE_CXX_FLAGS "-std=c++11")
set(CMAKE_CXX_FLAGS "-std=c++14")

(5) Record imu.bag: keep the IMU completely still for 30-120 minutes (the longer the better) while recording its bag:

rosbag record /imu -O imu.bag

(6) Calibrate the IMU; here the imu_utils sample data is used as an example:

rosbag play -r 200 imu_utils/imu.bag  # path to the bag you recorded, played back at 200x speed
source ./devel/setup.bash
roslaunch imu_utils A3.launch # sample data

Note that A3.launch contains parameters such as the IMU topic, the IMU name, and the calibration duration; modify them as needed:

<launch>
    <node pkg="imu_utils" type="imu_an" name="imu_an" output="screen">
        <param name="imu_topic" type="string" value= "/djiros/imu"/>
        <param name="imu_name" type="string" value= "A3"/>
        <param name="data_save_path" type="string" value= "$(find imu_utils)/data/"/>
        <param name="max_time_min" type="int" value= "150"/>
        <param name="max_cluster" type="int" value= "100"/>
    </node>
</launch>

After the calibration finishes, mynteye_imu_param.yaml and several txt files are generated under $(find imu_utils)/data/.
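These noise parameters are what Kalibr's imu.yaml (used in section 3) expects. A small sketch that writes such a file, with placeholder numbers that you should replace by the averaged values from the imu_utils output:

import yaml

# Placeholder values: copy the averaged white-noise and bias random-walk terms
# from the imu_utils result files for your own IMU.
imu_params = {
    'rostopic': '/imu',
    'update_rate': 200.0,                   # Hz
    'accelerometer_noise_density': 1.0e-2,  # continuous-time white noise
    'accelerometer_random_walk': 2.0e-4,
    'gyroscope_noise_density': 5.0e-3,
    'gyroscope_random_walk': 4.0e-6,
}

with open('imu.yaml', 'w') as f:
    yaml.safe_dump(imu_params, f, default_flow_style=False)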

3. Joint calibration of camera and IMU

Use the calibration tool kalibr, which was installed earlier. Other tutorials recommend about 20 Hz for the camera and 200 Hz for the IMU. Since I could not download the author's sample data directly, the data here comes from this blog: Ubuntu18.04+kalibr calibration toolbox installation and compilation
(1) Recording
Subscribe to two topics, the images and the IMU:

rosbag record /imu_raw /image_raw

When recording, move as smoothly as possible: too fast and the images blur, too slow and the IMU axes are not properly excited. The author provides a demonstration video for reference: translate along and rotate about each axis three times, followed by a period of irregular motion (but do not move too violently). Keep the whole calibration board in view, and a camera-to-board distance of about 0.8 m is recommended while recording.


(2) Calibration

source devel/setup.bash
rosrun kalibr kalibr_calibrate_imu_camera --target dynamic/april_6x6.yaml --bag dynamic/dynamic.bag --cam dynamic/camchain_mono.yaml --imu dynamic/imu_adis16448.yaml --bag-from-to 5 45 --imu-models scale-misalignment --timeoffset-padding 0.1 --show-extraction

Similar to section 1.3, the parameters are:

  • --bag dynamic/dynamic.bag: path to the recorded bag;
  • --bag-from-to 5 45: read from second 5 to second 45 of the bag;
  • --target april_6x6.yaml: the calibration board parameter file;
target_type: 'aprilgrid'   #gridtype
tagCols: 6              #number of apriltags
tagRows: 6             #number of apriltags
tagSize: 0.088          #size of apriltag, edge to edge [m]
tagSpacing: 0.3         #ratio of space between tags to tagSize
codeOffset: 0           #code offset for the first tag in the aprilboard
  • --cam dynamic/camchain_mono.yaml: the camera intrinsics calibrated earlier (blindly reusing someone else's data is not correct); if they were not calibrated with kalibr, you may need to write a file in the following format:
cam0:
  cam_overlaps: [1, 3]
  camera_model: pinhole
  distortion_coeffs: [-0.0016509958435871643, 0.02437222940989351, -0.03582816956989852,
    0.019860839087717054]
  distortion_model: equidistant
  intrinsics: [461.487246372674, 460.1113992557959, 356.39105303227853, 231.15719697054647]
  resolution: [752, 480]
  rostopic: /cam0/image_raw
  • --imu dynamic/imu_adis16448.yaml: the IMU intrinsics calibrated with the previous method, filled in the following format (a short note on these noise terms follows this list):
rostopic: /imu0
update_rate: 200.0 #Hz

accelerometer_noise_density: 0.01 #continous
accelerometer_random_walk: 0.0002

gyroscope_noise_density: 0.005 #continous
gyroscope_random_walk: 4.0e-06
  • --imu-models scale-misalignment: the parametric error model of the IMU;
  • --timeoffset-padding 0.1: the larger this value, the longer the calibration takes to run.
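For reference, the noise densities in this file are continuous-time quantities; Kalibr's IMU noise model relates them to the discrete-time standard deviation at the sample rate by sigma_d = sigma_c * sqrt(update_rate), which is a handy sanity check against raw sensor readings:

import math

update_rate = 200.0         # Hz, from imu.yaml
gyro_noise_density = 0.005  # continuous-time, rad/s/sqrt(Hz)

# Discrete-time standard deviation at the sample rate (sigma_d = sigma_c * sqrt(f)).
sigma_d = gyro_noise_density * math.sqrt(update_rate)
print(sigma_d)  # about 0.071 rad/s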

After the calibration is completed, the result plots pop up automatically. At the same time, several result files are generated in the data folder, including:

  • Camera intrinsics, camera-IMU extrinsics, and the camera-IMU time offset timeshift_cam_imu (a small numpy sketch for reusing T_cam_imu follows this list):
cam0:
  T_cam_imu:
  - [0.01676354570202121, 0.9998590327410452, -0.000947724452812317, 0.06843047675330072]
  - [-0.9998594579273542, 0.016763745309816058, 0.0002030674908090371, -0.014939612219976138]
  - [0.00021892627629230532, 0.0009441871264906492, 0.999999530290868, -0.0038646995667698156]
  - [0.0, 0.0, 0.0, 1.0]
  cam_overlaps: [1, 3]
  camera_model: pinhole
  distortion_coeffs: [-0.0016509958435871643, 0.02437222940989351, -0.03582816956989852, 0.019860839087717054]
  distortion_model: equidistant
  intrinsics: [461.487246372674, 460.1113992557959, 356.39105303227853, 231.15719697054647]
  resolution: [752, 480]
  rostopic: /cam0/image_raw
  timeshift_cam_imu: 5.9040012306710654e-05
  • IMU internal and external parameters:
imu0:
  T_i_b:
  - [1.0, 0.0, 0.0, 0.0]
  - [0.0, 1.0, 0.0, 0.0]
  - [0.0, 0.0, 1.0, 0.0]
  - [0.0, 0.0, 0.0, 1.0]
  accelerometer_noise_density: 0.01
  accelerometer_random_walk: 0.0002
  gyroscope_noise_density: 0.005
  gyroscope_random_walk: 4.0e-06
  model: calibrated
  rostopic: /imu0
  time_offset: 0.0
  update_rate: 200.0
  • txt file of calibration results
  • The pdf document of the calibration result drawing
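T_cam_imu is the IMU-to-camera transform in Kalibr's convention (it maps a point expressed in the IMU frame into the camera frame); if your pipeline needs the opposite direction, invert the homogeneous transform. A minimal numpy sketch using the matrix printed above:

import numpy as np

# T_cam_imu from the camchain-imucam result (IMU frame -> camera frame).
T_cam_imu = np.array([
    [0.01676354570202121,    0.9998590327410452,    -0.000947724452812317,  0.06843047675330072],
    [-0.9998594579273542,    0.016763745309816058,   0.0002030674908090371, -0.014939612219976138],
    [0.00021892627629230532, 0.0009441871264906492,  0.999999530290868,     -0.0038646995667698156],
    [0.0, 0.0, 0.0, 1.0],
])

R, t = T_cam_imu[:3, :3], T_cam_imu[:3, 3]
T_imu_cam = np.eye(4)
T_imu_cam[:3, :3] = R.T
T_imu_cam[:3, 3] = -R.T @ t  # inverse of a rigid-body transform
print(T_imu_cam)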


4. Joint calibration of camera and LiDAR

The calibration uses the calibration_camera_lidar package split out from Autoware.
(1) Build and install nlopt with CMake:

git clone https://github.com/stevengj/nlopt.git
cd nlopt
mkdir build && cd build
cmake ..
make
sudo make install

Be careful not to install nlopt with sudo apt-get install, otherwise catkin_make will run into problems later.
(2) Compile calibration_camera_lidar:

catkin_make
  • The old package needs melodic/noetic added to its supported ROS versions; modify lines 72/94/114 of CMakeLists.txt:
# if ("{ROS_VERSION}" MATCHES "(indigo|jade|kinetic)")
if ("{ROS_VERSION}" MATCHES "(indigo|jade|kinetic|melodic|noetic)")
  • If an error is reported mentioning
jsk_recognition_msgsConfig.cmake
jsk_recognition_msgs-config.cmake

it means the jsk-recognition-msgs package required for compilation is missing; install it:

sudo apt-get install ros-<ROS VERSION NAME>-jsk-recognition-msgs
  • Error: error: 'CV_RGB' was not declared in this scope; did you mean 'CV_YUV2RGB'? Following the position given in the message, change CV_RGB at the offending location to cvScalar.
  • Error: error: no match for 'operator=' (operand types are 'CvMat' and 'cv::Mat').
    This is a conversion problem between CvMat and cv::Mat; change the two offending statements to:
*m_intrinsic =  cvMat(m_intrinsic_opencv2to1);
*m_dist = cvMat(m_dist_opencv2to1);
  • A similar error about cv_image->image: change cv_image->image at the offending location to cvIplImage(cv_image->image). Note that depending on the version, cvIplImage may be written as IplImage and cvMat as CvMat.

All of the above are OpenCV version problems: the code was written against OpenCV 2 and is not fully compatible with OpenCV 3/4.

  • Error: undefined reference to cv::Mat::Mat() in CMakeFiles/calibration_publisher.dir/nodes/calibration_publisher/calibration_publisher.cpp.o (from cv::operator>><cv::Mat> in opencv2/core/persistence.hpp).
    Link OpenCV for the calibration_publisher node in CMakeLists.txt:
target_link_libraries(calibration_publisher
        ${catkin_LIBRARIES}
        ${OpenCV_LIBS})

(3) Record a bag: keep the vehicle fixed and move the calibration board, with motions similar to camera calibration: near and far, left and right relative to the front of the vehicle, and at various tilted orientations (you can record with Autoware or with rosbag record TOPIC).
(4) Run the package:

source devel/setup.bash
# run roscore in a new terminal first
rosrun calibration_camera_lidar calibration_toolkit

(5) Calibration procedure
1) Use Autoware to play and pause the data set;
2) After the tool starts, select the image topic (according to the topic you published);
3) Select the calibration type: camera to Velodyne.
Because the tool is based on Autoware, the point cloud topic must be /points_raw. If your point cloud topic differs, it can be changed in either of two ways:

  • In the LiDAR driver configuration, change the frame ID to velodyne and the topic to /points_raw;
  • Remap the topic when playing the rosbag, for example:
rosbag play autoware-20221006.bag /rslidar_points:=/points_raw

4) Click OK to open the calibration interface. In the text boxes at the top, pattern size is the side length of a checkerboard square in metres, and pattern number is the corner count (note: the number of inner corners, i.e. the number of squares along each side minus 1).

5) Before calibrating, you may import a yml file with the camera intrinsics (it must have been produced by this package; the step is optional because the following steps also calibrate the intrinsics). Click Load to import the file; once imported, CameraMat / DistCoeff / ImageSize show the corresponding values;

6) Resize the window, click in the black point cloud area, press b to adjust the background color and 2 to switch modes, then adjust the viewpoint so the point cloud is visible. LiDAR view controls:
  • Move the point cloud: arrow keys, PgUp, PgDn
  • Rotate the point cloud: a, d, w, s, q, e
  • Switch mode: number keys 1 and 2
  • Zoom: minus to zoom out, plus to zoom in
  • Point size: o to shrink the points, p to enlarge them (enlarging is recommended for clarity)
  • Background color of the point cloud window: b (an all-white background is recommended for clarity)

Pause playback on a suitable frame (one where the calibration board is complete and clear in both the image and the point cloud) and click Grab to add it to the candidate calibration sequence. Then adjust the view of the laser data in the lower-right panel so the coordinate axis falls on the centre of the calibration board, and left-click to place the red centre mark and add it to the calibration frame. If the red mark is wrong, right-click to remove it; if you are not satisfied with the candidate frame, click Remove in the upper right.


Mark about 30 frames, then click Calibrate to compute the result; CameraExtrinsicMat / CameraMat / DistCoeff / ImageSize will show the corresponding values. Once computed, click Project to project the laser points on the calibration board into the image (the red dots in the picture) and roughly check whether the calibration is correct. Finally click Save in the upper left to save the calibration result as a .yml file: only the joint calibration result is saved, i.e. the camera intrinsics and the extrinsics from the camera coordinate system to the LiDAR coordinate system; do not save the data used for calibration, it is very large, unless you need it. A rough Python sketch of this projection check follows.

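To check the extrinsics outside the GUI, the LiDAR points can be projected into the image with OpenCV. A rough sketch, assuming the saved YAML stores the camera-to-LiDAR transform as described above (invert the other way round if your file uses the opposite convention) and that the scan is already loaded as an N x 3 array; all values below are placeholders to be replaced from the saved .yml:

import numpy as np
import cv2

# Placeholders: read CameraExtrinsicMat, CameraMat and DistCoeff from the saved .yml instead.
T_lidar_cam = np.eye(4)  # camera -> LiDAR, as the text above describes the saved extrinsics
K = np.array([[461.1, 0.0, 363.2],
              [0.0, 459.8, 235.6],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
points_lidar = np.random.rand(100, 3) * 10.0  # stand-in for a real scan (N x 3, metres)

# Invert to get LiDAR -> camera, then project with the pinhole model.
T_cam_lidar = np.linalg.inv(T_lidar_cam)
rvec, _ = cv2.Rodrigues(T_cam_lidar[:3, :3])
tvec = T_cam_lidar[:3, 3]
uv, _ = cv2.projectPoints(points_lidar.astype(np.float64), rvec, tvec, K, dist)
print(uv.reshape(-1, 2)[:5])  # pixel coordinates of the first few points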

(6) After the result is saved, you can visualize the fused camera-LiDAR result in Autoware: play the data in Simulation, then click Calibration Publisher on the Sensing tab to load the calibration result file; target_frame is the LiDAR frame ID from the calibration data, camera_frame is the camera frame ID, and in the Ref column load the yml file.

Then click Points Image and open RViz. In the menu bar choose Panels - Add New Panel, select the ImageViewPlugin plugin, choose the image and point cloud topics, and continue playing the data set; you will see the image and point cloud fused together.

5. Joint calibration of LiDAR and IMU

Use the lidar_imu_calib package. After the calibrations in the previous two sections, the LiDAR-IMU extrinsics can already be obtained by chaining them, but a direct calibration lets you confirm the earlier results, and it is useful when the device has no camera.

5.1 Solution 1: Zhejiang University open source lidar_IMU_calib

5.2 Solution 2: lidar-align

5.3 Solution 3: lidar_imu_calib

This package only calibrates the rotation component of the LiDAR-IMU transform.
(1) Download the source code from github and compile it in your own workspace:

catkin_make

If you encounter a large number of PCL errors, it is because the ROS version on Ubuntu 20.04 is Noetic, which ships PCL 1.10; this version must be compiled with C++14 or above, so adjust the standard in CMakeLists.txt.
(2) Record a bag: the environment should contain many planar and linear features, such as buildings with rich structure, not degraded scenes (lots of trees, bushes, or tunnels), and the vehicle should move slowly.
(3) Calibration

  • Modify the IMU and LiDAR topic names and the bag path in the launch file:
<launch>
    <node pkg="lidar_imu_calib" type="calib_exR_lidar2imu_node" name="calib_exR_lidar2imu_node" output="screen">
        <param name="/lidar_topic" value="/velodyne_points"/>
        <param name="/imu_topic" value="/imu/data"/>
        <param name="/bag_file" value="/media/zard/SLAMData/2017-06-08-15-52-45_3.bag"/>
    </node>
</launch>
  • Run the package:
roslaunch lidar_imu_calib calib_exR_lidar2imu.launch

The package prints the lidar messages as they are added, then reports the number of lidar and IMU messages and the number of constraints formed, and after a while the result is printed:
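If the printed result is a rotation matrix (or quaternion), converting it to roll/pitch/yaw makes it easier to compare against the physical mounting of your sensors; a small sketch with a placeholder rotation, assuming scipy is available:

import numpy as np
from scipy.spatial.transform import Rotation as R

# Placeholder: substitute the LiDAR-to-IMU rotation reported by the tool.
R_imu_lidar = np.eye(3)
rpy_deg = R.from_matrix(R_imu_lidar).as_euler('xyz', degrees=True)
print('roll/pitch/yaw [deg]:', rpy_deg)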



Origin blog.csdn.net/zardforever123/article/details/130030767