Joint calibration of a solid-state lidar and a camera with the Autoware calibration tool, and using the calibration results for projection (C++)

This article mainly walks through the joint calibration of a RoboSense (Sagitar) RS-LiDAR-M1 solid-state lidar and the left camera of a MYNT EYE stereo camera, introduces some tips and pitfalls to avoid during the calibration process to improve calibration efficiency, and finally gives the key code for using the calibration results for projection.

1. Install Autoware (only needed for visualization after calibration; optional and not used in this article)
Refer to https://blog.csdn.net/qq_41545537/article/details/109312868

2. Install the Autoware camera and lidar joint calibration tool
Refer to https://blog.csdn.net/qq_43509129/article/details/109327157

3. Perform calibration
1. Play the data
Play back the calibration bag recorded in advance with the following command:

rosbag play your_calibration_data.bag /rslidar_points:=/points_raw --loop

Here, the argument /rslidar_points:=/points_raw remaps the solid-state lidar's original topic from /rslidar_points to /points_raw, because the calibration tool subscribes to the fixed topic /points_raw. The --loop flag plays the bag in a loop.

2. Start the calibration tool installed in step 2
① Open a terminal in the workspace containing the calibration tool and execute:

source devel/setup.bash
rosrun calibration_camera_lidar calibration_toolkit

② In the pop-up dialog, select the camera's image topic. For the MYNT EYE camera this is /mynteye/left/image_color or /mynteye/left/image_mono; choose according to your own setup.
③ Then select Camera->Velodyne in the next dialog; the following interface appears:
(screenshot)

3. Calibration process
① Pattern Size is the size of the calibration board pattern, in meters; fill it in according to your actual board.
② Pattern Number is the number of grid corners on the calibration board; fill it in according to your actual board. Note: this is important. Based on the pose of the board when you recorded the data, enter the vertical count first and the horizontal count second (i.e. vertical × horizontal), and enter the number of grid intersections rather than squares: for example, 7 squares means 6 intersections. (In my tests, entering horizontal × vertical instead produced a very large error.) Restart the calibration tool after filling these in for the values to take effect.
③ After startup, images and point cloud data are normally both displayed, but the point cloud display area on the right initially looks black because of the viewing angle.
④ Click the black area and press the number key "2" on the main keyboard to switch the viewing angle; you will then see a bird's-eye view of the point cloud.
(screenshot)
⑤ Press button "B" to change the background to white.
⑥ Long-press the "E" key to adjust the point cloud. Since this solid-state lidar has only a 120-degree horizontal field of view, the orientation is correct when roughly half of the point cloud lies on the left and half on the right.
(screenshot)
⑦ Press and hold the button "W" to adjust the view angle of the point cloud to the front view.
⑧ Long-press the minus key "-" to zoom the point cloud until the calibration board can be seen. The overall effect should now look like this:
(screenshot)
⑨ Press the number key "1" to switch the viewing angle again. Because the resolution of the solid-state lidar is so high, the surrounding scene may be hard to make out. If the calibration board cannot be distinguished, press the "P" key to increase the point size of the point cloud; if once is not enough, press it several times. The final effect is as follows (with the "P" key pressed 4 times):
(screenshot)
The above steps must be completed. The display is actually divided into four parts: the upper right is a widget showing the live point cloud, the upper left shows the live image, the lower right shows captured point cloud frames, and the lower left shows captured image frames. Before performing the subsequent steps, adjust the live point cloud view in the upper right to a position where the board is easy to select; this makes the captured point cloud frames easy to select later. Otherwise, the adjustment steps above have to be repeated for every captured point cloud frame, which wastes a lot of time.

⑩ Now that the point cloud view has been adjusted, enlarge the image pane so that the image is displayed completely. The effect is as follows:
(screenshot)
⑪ Next, capture calibration frames. The selection criterion is: a frame is qualified when the calibration board appears completely in both the image and the point cloud. In the figure above, the calibration board is just fully visible in the image and fully visible in the point cloud, so this is a qualified frame.
⑫ For a qualified frame, click the "Grab" button in the upper right corner. If the capture succeeds (that is, the program detects the checkerboard in the image), the frame is displayed at the bottom of the interface, as shown below:
(screenshot)
⑬ Repeat the capture for roughly twenty to thirty frames; in theory, the more frames, the better the calibration result. Capture the calibration board in as many different poses as possible.
⑭ After capturing is complete, start manually selecting the point cloud. As shown in the figure above, the calibration board is detected automatically in the image; we need to manually select the board in the point cloud to form the correspondences used to solve for the transformation matrix.
⑮ When selecting, you can enlarge the point cloud pane at the bottom right. The selection marker is a circular patch. As before, to make the point cloud easier to distinguish, turn the background white and increase the point size until the calibration board is clearly visible.
⑯ Left-click to select the calibration board, trying to place the circular patch in the middle of the board. If you are not satisfied with the selection, right-click to cancel it and select again. The effect is as follows:
(screenshot)
⑰ Do this for every captured frame until all frames have been selected.
⑱ Click the "Calibrate" button in the upper right corner.
⑲ After the computation finishes, click the "Project" button in the upper right corner; the following effect should appear:
(screenshot)
The red dot on the left is the result of projecting the manually selected calibration board point on the right onto the image using the calibrated parameters. Ideally, the red dot should land exactly where we selected on the calibration board; in practice, a rough match is good enough.
⑳ If every frame is basically aligned, the calibration result is reasonable; if the error is large, the calibration failed and needs to be redone.
㉑ When calibration is finished, click the "Save" button in the upper left corner to save the calibration result; give it a name and save it to a folder. Then answer "No" to the two dialogs that pop up afterwards.
㉒ This completes the calibration process.

4. Use the calibration results for projection
After calibration is complete, you obtain a file in YAML format containing the extrinsic matrix obtained by calibration, the camera intrinsic matrix, the camera distortion coefficients, the image size, and the reprojection error. An example is shown below:

%YAML:1.0
---
CameraExtrinsicMat: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [ -5.8407131946527358e-03, -3.2811216518650155e-02,
       9.9944450078028035e-01, 1.9907201492930962e-01,
       -9.9986339451409767e-01, -1.5262476339699793e-02,
       -6.3442199462111493e-03, 6.9461651636179914e-02,
       1.5462159620299232e-02, -9.9934502594776853e-01,
       -3.2717590579534050e-02, -1.5654594735971714e-01, 0., 0., 0., 1. ]
CameraMat: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 1.0143281094389476e+03, 0., 6.3163571518821800e+02, 0.,
       1.0096395620868118e+03, 3.2954732055473158e+02, 0., 0., 1. ]
DistCoeff: !!opencv-matrix
   rows: 1
   cols: 5
   dt: d
   data: [ -6.3944068169403991e-03, -1.7957073252917993e-02,
       -1.3865038466759662e-02, 1.5781011631053978e-03,
       1.5292969053996039e-01 ]
ImageSize: [ 1280, 720 ]
ReprojectionError: 9.3588671201531770e-01

The cv::projectPoints function in OpenCV can be used to project the point cloud onto the image. The specific code follows.

// The following code reads the extrinsic matrix, the intrinsic matrix and the
// distortion coefficients from the calibration result file. In the extrinsic
// matrix, the top-left 3x3 block is the rotation and the fourth column is the
// translation vector.
cv::Mat extrinsic_mat, camera_mat, dist_coeff; // extrinsic matrix, intrinsic matrix, distortion coefficients
cv::Mat rotate_mat, transform_vec;             // rotation matrix and translation vector used for projection
cv::FileStorage fs_read("your_calibration_file_path", cv::FileStorage::READ); // open the calibration result file
fs_read["CameraExtrinsicMat"] >> extrinsic_mat; // read the 4x4 extrinsic matrix
fs_read["CameraMat"] >> camera_mat;             // read the 3x3 camera intrinsic matrix
fs_read["DistCoeff"] >> dist_coeff;             // read the 1x5 distortion coefficients
fs_read.release();                              // close the file

// Extract the rotation matrix and translation vector from the extrinsic matrix.
rotate_mat = cv::Mat(3, 3, cv::DataType<double>::type); // allocate the 3x3 rotation matrix
for (int i = 0; i < 3; i++)
{
    for (int j = 0; j < 3; j++)
    {
        rotate_mat.at<double>(i, j) = extrinsic_mat.at<double>(j, i); // transpose of the top-left 3x3 block
    }
}
transform_vec = cv::Mat(3, 1, cv::DataType<double>::type); // allocate the 3x1 translation vector
transform_vec.at<double>(0) = extrinsic_mat.at<double>(1, 3);
transform_vec.at<double>(1) = extrinsic_mat.at<double>(2, 3);
transform_vec.at<double>(2) = -extrinsic_mat.at<double>(0, 3);
In the code above, rotate_mat is actually the transpose of the top-left 3x3 block of the extrinsic matrix, and transform_vec is not filled in following the order of the fourth column of the extrinsic matrix. For the detailed reasons, refer to https://blog.csdn.net/qq_22059843/article/details/103022451
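As a cross-check, the hand-reordered translation above can be seen as an approximation of the exact inverse translation -R^T·t, which the reordering matches when the rotation is close to a pure axis permutation between the lidar and camera frames. Below is a minimal sketch (not from the original article) of the more general computation, assuming CameraExtrinsicMat maps camera coordinates into lidar coordinates; the *_exact variable names are introduced here only for illustration:

// Sketch: extract the rotation block and translation column of the 4x4 extrinsic
// matrix and invert the transform explicitly instead of reordering by hand.
cv::Mat R = extrinsic_mat(cv::Rect(0, 0, 3, 3)).clone(); // top-left 3x3 rotation block
cv::Mat t = extrinsic_mat(cv::Rect(3, 0, 1, 3)).clone(); // top-right 3x1 translation column
cv::Mat rotate_mat_exact = R.t();                        // inverse rotation: R^T
cv::Mat transform_vec_exact = -rotate_mat_exact * t;     // inverse translation: -R^T * t
// rotate_mat_exact and transform_vec_exact could be passed to cv::projectPoints
// in place of rotate_mat and transform_vec below.

With the sample calibration file above, the two versions of the translation differ by only a few millimetres, consistent with the extrinsic rotation being nearly a pure axis permutation.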

// This function projects the point cloud onto the image: every point that falls
// inside the image is marked on the image and is also colored with the
// corresponding pixel color of the image.
// Inputs: the point cloud and the image.
void projection(const pcl::PointCloud<pcl::PointXYZI>::Ptr& ccloud, cv::Mat& img)
{
    std::vector<cv::Point3f> points3d; // the cloud points must be stored as cv::Point3f
    points3d.reserve(ccloud->size() + 1); // reserve enough memory up front to avoid repeated reallocation during push_back
    cv::Point3f point;
    for (size_t i = 0; i < ccloud->size(); i++)
    {
        point.x = ccloud->points[i].x;
        point.y = ccloud->points[i].y;
        point.z = ccloud->points[i].z;
        points3d.push_back(point); // insert one by one
    }
    std::vector<cv::Point2f> projectedPoints; // stores the projected 2D points (3D points become 2D after projection)

    // Core projection call:
    // 1st argument: the original 3D points
    // 2nd argument: the rotation (a 3x3 matrix or a Rodrigues vector)
    // 3rd argument: the translation vector
    // 4th argument: the camera intrinsic matrix
    // 5th argument: the distortion coefficients
    // 6th argument: the output 2D points
    cv::projectPoints(points3d, rotate_mat, transform_vec, camera_mat, dist_coeff, projectedPoints);

    pcl::PointXYZRGB point_rgb;
    // Iterate over the projection results.
    for (size_t i = 0; i < projectedPoints.size(); i++)
    {
        cv::Point2f p = projectedPoints[i];
        point_rgb.x = ccloud->points[i].x;
        point_rgb.y = ccloud->points[i].y;
        point_rgb.z = ccloud->points[i].z;
        point_rgb.r = 0;
        point_rgb.g = 0;
        point_rgb.b = 0;

        // The image is 1280x720, so points whose projection falls outside the
        // image keep the default black color.
        if (p.y < 720 && p.y >= 0 && p.x < 1280 && p.x >= 0)
        {
            int row = static_cast<int>(p.y);
            int col = static_cast<int>(p.x);
            point_rgb.r = int(img.at<cv::Vec3b>(row, col)[2]); // read the pixel's BGR values
            point_rgb.g = int(img.at<cv::Vec3b>(row, col)[1]);
            point_rgb.b = int(img.at<cv::Vec3b>(row, col)[0]);
        }
        rgb_cloud->push_back(point_rgb); // append the colored point to rgb_cloud (a colored cloud defined outside this function)
    }
    sensor_msgs::PointCloud2 ros_cloud;   // declare a ROS point cloud message
    pcl::toROSMsg(*rgb_cloud, ros_cloud); // convert the PCL cloud to a ROS message
    ros_cloud.header.frame_id = "rslidar";
    cloud_pub.publish(ros_cloud); // cloud_pub is a ROS publisher defined outside this function; publish the colored cloud

    // Iterate over the projection results again.
    for (size_t i = 0; i < projectedPoints.size(); i++)
    {
        cv::Point2f p = projectedPoints[i];
        // Only points that project inside the 1280x720 image are marked on the image.
        if (p.y < 720 && p.y >= 0 && p.x < 1280 && p.x >= 0)
        {
            // Mark the point by drawing a small circle at the projected position.
            cv::circle(img, p, 1, cv::Scalar(0, 0, 255), 1, 8, 0);
        }
    }
    sensor_msgs::ImagePtr msg = cv_bridge::CvImage(std_msgs::Header(), "bgr8", img).toImageMsg(); // convert the OpenCV image to a ROS image with cv_bridge
    image_pub.publish(msg); // image_pub is a ROS image publisher defined outside this function; publish the annotated image
}
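The projection function relies on several variables defined outside of it (rotate_mat, transform_vec, camera_mat, dist_coeff, rgb_cloud, cloud_pub, image_pub). The original article does not show the surrounding node, so the following is only a minimal sketch of one possible way to wire it up; the node name, topic names, queue sizes, and the use of an approximate-time synchronizer are assumptions, not the author's actual setup:

#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/PointCloud2.h>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/opencv.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>
#include <boost/bind.hpp>

// Globals used by projection(); in the article they are defined outside the function.
cv::Mat extrinsic_mat, camera_mat, dist_coeff;
cv::Mat rotate_mat, transform_vec;
pcl::PointCloud<pcl::PointXYZRGB>::Ptr rgb_cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
ros::Publisher cloud_pub, image_pub;

void projection(const pcl::PointCloud<pcl::PointXYZI>::Ptr& ccloud, cv::Mat& img); // defined above

void callback(const sensor_msgs::ImageConstPtr& img_msg,
              const sensor_msgs::PointCloud2ConstPtr& cloud_msg)
{
    cv::Mat img = cv_bridge::toCvCopy(img_msg, "bgr8")->image;  // ROS image -> OpenCV
    pcl::PointCloud<pcl::PointXYZI>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZI>);
    pcl::fromROSMsg(*cloud_msg, *cloud);                        // ROS cloud -> PCL
    rgb_cloud->clear();                                         // reuse the global colored cloud
    projection(cloud, img);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "lidar_camera_projection");
    ros::NodeHandle nh;

    // Read the calibration file and fill rotate_mat / transform_vec here
    // (see the file-reading snippet above).

    cloud_pub = nh.advertise<sensor_msgs::PointCloud2>("colored_cloud", 1);
    image_pub = nh.advertise<sensor_msgs::Image>("projected_image", 1);

    message_filters::Subscriber<sensor_msgs::Image> img_sub(nh, "/mynteye/left/image_color", 1);
    message_filters::Subscriber<sensor_msgs::PointCloud2> cloud_sub(nh, "/rslidar_points", 1);
    typedef message_filters::sync_policies::ApproximateTime<sensor_msgs::Image,
                                                            sensor_msgs::PointCloud2> SyncPolicy;
    message_filters::Synchronizer<SyncPolicy> sync(SyncPolicy(10), img_sub, cloud_sub);
    sync.registerCallback(boost::bind(&callback, _1, _2));

    ros::spin();
    return 0;
}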

The effect after projection is shown below, with the image on the left and the colored point cloud on the right (the point cloud looks like an image because the point size has been enlarged):
(screenshot)
For the approximate mathematical details of the projection, you can refer to my previous article: https://blog.csdn.net/qq_38222947/article/details/118220876
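For quick reference (this summary is added here and is not taken from the linked article), ignoring lens distortion, the projection computed by cv::projectPoints is the standard pinhole camera model:

$$
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \left( R \, X_{\mathrm{lidar}} + t \right)
$$

where K is CameraMat, R and t are rotate_mat and transform_vec, X_lidar is a point from the lidar cloud, and s is the point's depth in the camera frame. cv::projectPoints additionally applies the DistCoeff distortion model to the normalized coordinates before multiplying by K.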
