cam_lidar_calibration: joint calibration of lidar and camera

Notes on the installation and use of cam_lidar_calibration (2021)

1. Introduction

Among the many open-source programs for lidar-camera calibration, cam_lidar_calibration works relatively well; the rest either have complicated calibration procedures or contain too many bugs. The package has gone through several revisions. The open-source algorithms in the cam_lidar_calibration series are listed below:

2019 (the original version)
  Paper: Automatic extrinsic calibration between a camera and a 3D Lidar using 3D point and plane correspondences
  Algorithm analysis: Uses a checkerboard as the target. From the board's known geometry, it computes the centre point and normal vector of the board in the lidar point cloud, obtains the corresponding centre point and normal vector in the camera frame, and completes the calibration by optimisation.
  Open-source code: https://gitlab.acfr.usyd.edu.au/sverma/cam_lidar_calibration

2020
  Paper: same as above
  Algorithm analysis: same as above (minor modifications to the 2019 code, but the documentation is paywalled on CSDN)
  Open-source code: https://github.com/AbangLZU/cam_lidar_calibration

2021
  Paper: Optimising the selection of samples for robust lidar camera calibration
  Algorithm analysis: Also from the ACFR laboratory at the University of Sydney. Building on the 2019 version, it uses the variability of quality (VOQ) metric to reduce the error introduced by user-selected data, overcomes overfitting, and is more robust than the 2019 calibration process.
  Open-source code: https://github.com/acfr/cam_lidar_calibration
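
For intuition, this family of methods solves a least-squares problem over the extrinsic rotation R and translation t using the board's normal and centre observed in both sensors. The following is a generic sketch of such an objective, not the exact cost function from the papers:

    \[
    (\mathbf{R}^{*}, \mathbf{t}^{*})
      = \arg\min_{\mathbf{R},\,\mathbf{t}} \sum_{i=1}^{N}
        \left( \left\lVert \mathbf{R}\,\mathbf{n}_i^{c} - \mathbf{n}_i^{l} \right\rVert^{2}
             + \left\lVert \mathbf{R}\,\mathbf{c}_i^{c} + \mathbf{t} - \mathbf{c}_i^{l} \right\rVert^{2} \right)
    \]

where n_i and c_i are the board's normal vector and centre point for sample i, and the superscripts c and l denote the camera and lidar frames.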

2. Installation and testing

The official documentation is very detailed and covers both a local installation and a Docker installation.

Only the local installation on Ubuntu 20.04 has been tested here, so local installation is used as the example.

The code on GitHub targets Ubuntu 18.04, so some code and commands need to be modified as described on the official site. The installation steps are as follows:

  1. Install dependencies

    sudo apt update && sudo apt-get install -y ros-noetic-pcl-conversions ros-noetic-pcl-ros ros-noetic-tf2-sensor-msgs
    # Ubuntu 20.04 defaults to Python 3
    sudo apt install python3-pip
    pip3 install pandas scipy
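
    As an optional sanity check (not part of the official instructions), you can verify that the Python dependencies are importable:

    # check_deps.py -- optional: confirm the pip packages installed correctly
    import pandas
    import scipy
    print("pandas", pandas.__version__, "| scipy", scipy.__version__)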
    
  2. Download, compile, and install the code

    mkdir -p cam_lidar_calibration_ws/src
    cd cam_lidar_calibration_ws/src
    git clone https://github.com/acfr/cam_lidar_calibration.git
    cd ..
    catkin build
    source devel/setup.bash
    

    If you encounter a compilation error about optimize.h saying that #include <opencv/cv.hpp> does not exist, change that include to #include <opencv2/opencv.hpp> (the header layout used by OpenCV 4 on Ubuntu 20.04).

  3. Test: the repository includes the authors' sample calibration dataset; enter the following command to run a test calibration:

    roslaunch cam_lidar_calibration run_optimiser.launch import_samples:=true
    

    The program runs the calibration on the poses stored in pose.csv in the cam_lidar_calibration_ws/src/cam_lidar_calibration/data/vlp/ folder, and generates a camera-lidar extrinsic parameter file in the same folder; each line of that file is the result of one iteration.
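
    Since each line is one optimisation iteration, a quick way to inspect the last rows is pandas (installed in step 1). This is a sketch that assumes the generated file is plain comma-separated text; substitute the file name actually produced in your data/vlp/ folder:

    # peek_calibration.py -- look at the final iterations of the calibration CSV
    import pandas as pd

    df = pd.read_csv("data/vlp/calibration_quickstart.csv")  # adjust to your output file
    print(df.tail())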

    Then evaluate the calibration results:

    roslaunch cam_lidar_calibration assess_results.launch csv:="$(rospack find cam_lidar_calibration)/data/vlp/calibration_quickstart.csv" visualise:=true
    
  4. Calibrate the extrinsic parameters of your own camera and lidar.

    • Make the calibration board: the board should be as large as possible. Download the chessboard PDF and print it at the specified size; the author used "A1 - 95mm squares - 7x5 vertices, 8x6 squares" (8 × 6 squares of 95 mm give a printed pattern of 760 × 570 mm). Fix the printed chessboard to a backing board so that the two centres are aligned as closely as possible and the edges stay parallel; these dimensions are needed when setting the parameters later. The picture below shows an A1 target with 7×5 internal vertices and 95 mm squares.
      [Image: A1 calibration target with 7×5 internal vertices and 95 mm squares]

    • Set the parameters: the main files to modify are cam_lidar_calibration/cfg/camera_info.yaml and params.yaml

      • camera_info.yaml: set whether the camera is a fisheye, the pixel width and height, the intrinsic matrix, and the distortion coefficients. To calibrate the camera intrinsics yourself, see the official ROS camera calibration tutorial. (A short sketch at the end of this parameter subsection shows how to read the K entries.)

        distortion_model: "non-fisheye"
        width: 640
        height: 480
        D: [0,0,0,0]
        K: [617.68,0.0,325.963,0.0,617.875,242.513,0.0,0.0,1]
        
      • params.yaml

        # Topics
        camera_topic: "/camera/color/image_raw"
        camera_info: "/camera/color/camera_info"
        lidar_topic: "/velodyne_points"

        # Dynamic rqt_reconfigure default bounds (selection range of the point cloud)
        feature_extraction:
          x_min: -10.0
          x_max: 10.0
          y_min: -8.0
          y_max: 8.0
          z_min: -5.0
          z_max: 5.0

        # Properties of the chessboard calibration target
        chessboard:
          pattern_size:       # internal vertices of the chessboard, 7x5
            height: 7
            width: 5
          square_length: 95   # side length of a chessboard square, in mm
          board_dimension:    # width and height of the backing board the chessboard is mounted on, in mm
            width: 594
            height: 897
          translation_error:  # offset of the chessboard centre from the backing-board centre (see figure below), in mm
            x: 0
            y: 0
        

        The offset between the centre of the chessboard and the centre of the backing board:
        [Image: offset between the chessboard centre and the backing-board centre]
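
      As referenced above, the flat K list in camera_info.yaml is the 3×3 intrinsic matrix in row-major order. A minimal sketch of unpacking it, with values copied from the example file above:

        # read_intrinsics.py -- interpret the row-major K list from camera_info.yaml
        import numpy as np

        K = np.array([617.68, 0.0, 325.963,
                      0.0, 617.875, 242.513,
                      0.0, 0.0, 1.0]).reshape(3, 3)

        fx, fy = K[0, 0], K[1, 1]   # focal lengths in pixels
        cx, cy = K[0, 2], K[1, 2]   # principal point in pixels
        print(f"fx={fx}, fy={fy}, cx={cx}, cy={cy}")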

    • Start the actual calibration:

      • Start the program to collect calibration data by running:

        roslaunch cam_lidar_calibration run_optimiser.launch import_samples:=false
        

        The RViz and rqt_reconfigure windows will appear. In RViz, under Panels -> Displays, set the camera image topic and the frame_id corresponding to the lidar point cloud.

      • Data collection: segment the point cloud of the calibration board.
        Adjust the minimum and maximum x/y/z values under rqt_reconfigure/feature_extraction to separate the board's point cloud from the surrounding environment so that only the chessboard remains visible; conceptually this is an axis-aligned crop box (see the sketch after this step). If the chessboard is not completely isolated, the plane fitting of the board may be affected and the estimated board dimensions can show large errors. The picture below shows the point cloud before and after filtering:
        [Image: point cloud before and after filtering]
        After filtering out the surrounding point cloud, click Capture sample in RViz to collect a sample; a green box will appear representing the board plane fitted to the point cloud, as shown in the figure. If the green fitted rectangle does not appear, readjust the x/y/z range of the point cloud and capture the sample again.
        [Image: fitted board plane (green rectangle) after capturing a sample]
        The laser point cloud hitting the calibration board must cover at least 7 rings.
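
        For intuition, the feature_extraction bounds behave like an axis-aligned crop box over the cloud, as in this illustrative numpy sketch (not the package's actual code):

        # crop_box.py -- illustrative axis-aligned crop, mimicking the feature_extraction bounds
        import numpy as np

        points = np.random.uniform(-12, 12, size=(1000, 3))  # placeholder N x 3 lidar points (x, y, z)

        x_min, x_max = -10.0, 10.0   # bounds as set in rqt_reconfigure
        y_min, y_max = -8.0, 8.0
        z_min, z_max = -5.0, 5.0

        mask = ((points[:, 0] >= x_min) & (points[:, 0] <= x_max) &
                (points[:, 1] >= y_min) & (points[:, 1] <= y_max) &
                (points[:, 2] >= z_min) & (points[:, 2] <= z_max))
        board_candidates = points[mask]  # points that survive the crop
        print(board_candidates.shape)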

      • Calibration:
        Collect at least 3 samples before running the calibration; the more samples the better, ideally more than 10. Then click Optimise in RViz to run the calibration. During optimisation, a folder named with the current date and time is created under cam_lidar_calibration/data to store the captured images, point-cloud .pcd files, poses, and the resulting camera-lidar extrinsic parameter file.

      • Evaluate parameters and reprojection error:

        roslaunch cam_lidar_calibration assess_results.launch csv:="$(rospack find cam_lidar_calibration)/data/2022-02-10_13-48-12/calibration_2022-02-10_14-13-41.csv" visualise:=true
        

        An image visualising the reprojection appears:
        [Image: reprojection visualisation]
        Calibration parameters and reprojection errors appear on the terminal:

        ---- Calculating average reprojection error on 12 samples ---- 
        1/ 12 | dist= 4.069m, dimerr=  39.768mm | error:   2.948pix  -->  20.033mm
        2/ 12 | dist= 3.759m, dimerr=  43.992mm | error:   2.358pix  -->  14.689mm
        3/ 12 | dist= 3.038m, dimerr=  61.650mm | error:   0.240pix  -->   1.213mm
        4/ 12 | dist= 2.951m, dimerr=  52.597mm | error:   0.129pix  -->   0.628mm
        5/ 12 | dist= 2.913m, dimerr= 120.838mm | error:   0.782pix  -->   3.798mm
        6/ 12 | dist= 2.665m, dimerr=  76.147mm | error:   3.219pix  -->  14.106mm
        7/ 12 | dist= 2.482m, dimerr=  69.678mm | error:   2.728pix  -->  11.332mm
        8/ 12 | dist= 2.307m, dimerr=  36.923mm | error:   2.659pix  -->  10.284mm
        9/ 12 | dist= 2.118m, dimerr= 110.454mm | error:   6.066pix  -->  21.332mm
        10/ 12 | dist= 2.504m, dimerr=  66.145mm | error:   1.402pix  -->   5.853mm
        11/ 12 | dist= 2.845m, dimerr=  11.296mm | error:   1.408pix  -->   6.616mm
        12/ 12 | dist= 2.828m, dimerr=  98.653mm | error:   0.719pix  -->   3.536mm
        
        Calibration params (roll,pitch,yaw,x,y,z): -1.5416,-0.0047,-1.4423,0.0059,-0.0062,0.0149
        
        Mean reprojection error across  12 samples
        - Error (pix) =  2.055 pix, stdev =  1.663
        - Error (mm)  =  9.452 mm , stdev =  7.007
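
        The six calibration parameters are roll, pitch, yaw (in radians) and x, y, z (in metres). Below is a minimal sketch of assembling them into a 4×4 homogeneous transform with scipy (installed in step 1); the intrinsic 'xyz' roll-pitch-yaw Euler order is an assumption here, so verify it against the package's convention before relying on the result:

        # extrinsic_matrix.py -- build a 4x4 transform from the printed parameters
        import numpy as np
        from scipy.spatial.transform import Rotation

        roll, pitch, yaw = -1.5416, -0.0047, -1.4423   # radians, copied from the output above
        x, y, z = 0.0059, -0.0062, 0.0149              # metres, copied from the output above

        T = np.eye(4)
        T[:3, :3] = Rotation.from_euler('xyz', [roll, pitch, yaw]).as_matrix()  # assumed Euler order
        T[:3, 3] = [x, y, z]
        print(np.round(T, 4))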
        

Based on the code above, I made some modifications of my own; the joint lidar-camera calibration in this fork is more accurate: https://github.com/Xujianhong123Allen/sensor_calibration/tree/master/cam_lidar_calibration


Original article: blog.csdn.net/weixin_41681988/article/details/122867113