livox_camera_lidar_calibration learning: verifying the calibrated extrinsic parameters

The open source code is at: https://github.com/Shelfcol/livox_camera_lidar_calibration_modified (improvements to livox_camera_lidar_calibration).

1. Project point cloud onto image

roslaunch camera_lidar_calibration projectCloud.launch

1. Load the corresponding pictures and bag files, the camera intrinsics, distortion coefficients, and extrinsic parameters.

2. Using the projection model, compute the projection of each lidar point onto the image plane:

x = lidar_datas[i].points[j].x;
y = lidar_datas[i].points[j].y;
z = lidar_datas[i].points[j].z;

getTheoreticalUV(theoryUV, intrinsic, extrinsic, x, y, z);
int u = floor(theoryUV[0] + 0.5); // round to the nearest pixel
int v = floor(theoryUV[1] + 0.5);

3. Determine the pixel RGB from the x value of the laser point

The idea is to split the x range [min_depth, max_depth] into 10 equal blocks of width scale. Points nearer than min_depth are colored blue and points beyond max_depth magenta; between them, the block boundaries at 0, 1, 2, 4, 7, and 10 scales delimit intervals in which one color channel at a time is interpolated. (Note that the intervals spanning 2 or 3 blocks still divide by a single scale, so the & 0xff masks wrap interpolated values that exceed 255.)

// set the color by distance to the cloud
void getColor(int &result_r, int &result_g, int &result_b, float cur_depth) {
    float scale = (max_depth - min_depth)/10;
    if (cur_depth < min_depth) { // nearest points are blue
        result_r = 0;
        result_g = 0;
        result_b = 0xff;
    }
    else if (cur_depth < min_depth + scale) {
        result_r = 0;
        result_g = int((cur_depth - min_depth) / scale * 255) & 0xff;
        result_b = 0xff;
    }
    else if (cur_depth < min_depth + scale*2) {
        result_r = 0;
        result_g = 0xff;
        result_b = (0xff - int((cur_depth - min_depth - scale) / scale * 255)) & 0xff;
    }
    else if (cur_depth < min_depth + scale*4) {
        result_r = int((cur_depth - min_depth - scale*2) / scale * 255) & 0xff;
        result_g = 0xff;
        result_b = 0;
    }
    else if (cur_depth < min_depth + scale*7) {
        result_r = 0xff;
        result_g = (0xff - int((cur_depth - min_depth - scale*4) / scale * 255)) & 0xff;
        result_b = 0;
    }
    else if (cur_depth < min_depth + scale*10) {
        result_r = 0xff;
        result_g = 0;
        result_b = int((cur_depth - min_depth - scale*7) / scale * 255) & 0xff;
    }
    else { // beyond max_depth: magenta
        result_r = 0xff;
        result_g = 0;
        result_b = 0xff;
    }
}
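A quick standalone sanity check of this banding scheme. depthToRGB is a hypothetical rewrapping of the function above with min_depth and max_depth passed as parameters, so it can be tested with min_depth = 0 and max_depth = 10 (scale = 1):

```cpp
// Standalone copy of the banded depth-to-RGB mapping above,
// parameterized on the depth range for easy testing.
void depthToRGB(float d, float min_depth, float max_depth,
                int &r, int &g, int &b) {
    float scale = (max_depth - min_depth) / 10;
    if (d < min_depth)                 { r = 0;    g = 0;    b = 0xff; } // blue
    else if (d < min_depth + scale)    { r = 0;    g = int((d - min_depth) / scale * 255) & 0xff; b = 0xff; }
    else if (d < min_depth + scale*2)  { r = 0;    g = 0xff; b = (0xff - int((d - min_depth - scale) / scale * 255)) & 0xff; }
    else if (d < min_depth + scale*4)  { r = int((d - min_depth - scale*2) / scale * 255) & 0xff; g = 0xff; b = 0; }
    else if (d < min_depth + scale*7)  { r = 0xff; g = (0xff - int((d - min_depth - scale*4) / scale * 255)) & 0xff; b = 0; }
    else if (d < min_depth + scale*10) { r = 0xff; g = 0;    b = int((d - min_depth - scale*7) / scale * 255) & 0xff; }
    else                               { r = 0xff; g = 0;    b = 0xff; } // magenta
}
```

For example, a depth right at the start of the second band comes out cyan (blue still full, green just ramped to 255).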



int r,g,b;
getColor(r, g, b, x); // RGB determined by the x value

4. Final renderings 

2. Attach the color of the corresponding pixel to the point cloud

roslaunch camera_lidar_calibration colorLidar.launch

1. Load a corresponding picture and bag file, the camera intrinsic matrix K, distortion coefficients, and extrinsic parameters.

2. Remove distortion from the image

// use the intrinsic matrix and distortion coefficients to undistort the photo first
cv::Mat  map1, map2;
cv::Size imageSize = src_img.size();
cv::initUndistortRectifyMap(K, distortion_coef, cv::Mat(),cv::getOptimalNewCameraMatrix(K, distortion_coef, imageSize, 1, imageSize, 0), imageSize, CV_16SC2, map1, map2);
cv::remap(src_img, src_img, map1, map2, cv::INTER_LINEAR);  // correct the distortion

3. Get the RGB of each pixel on the photo

int row = src_img.rows;
int col = src_img.cols;
// get the RGB of every pixel
vector<vector<int>> color_vector;
color_vector.resize(row*col);
for (unsigned int i = 0; i < color_vector.size(); ++i) {
    color_vector[i].resize(3);
}

// read photo and get all RGB information into color_vector
ROS_INFO("Start to read the photo ");
for (int v = 0; v < row; ++v) {
    for (int u = 0; u < col; ++u) {
        // for .bmp photo, the 3 channels are BGR
        color_vector[v*col + u][0] = src_img.at<cv::Vec3b>(v, u)[2];
        color_vector[v*col + u][1] = src_img.at<cv::Vec3b>(v, u)[1];
        color_vector[v*col + u][2] = src_img.at<cv::Vec3b>(v, u)[0];
    }
}
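The loop above can be exercised without OpenCV by treating the image as a raw interleaved BGR buffer, which is how OpenCV stores 8UC3 images in memory. buildColorVector below is a hypothetical standalone equivalent, not the project's code:

```cpp
#include <vector>

// Copy a BGR interleaved buffer into an RGB-ordered lookup table
// indexed by v*col + u, mirroring the loop in the tutorial.
std::vector<std::vector<int>> buildColorVector(
        const std::vector<unsigned char> &bgr, int row, int col) {
    std::vector<std::vector<int>> color_vector(row * col, std::vector<int>(3));
    for (int v = 0; v < row; ++v) {
        for (int u = 0; u < col; ++u) {
            const unsigned char *px = &bgr[(v * col + u) * 3];
            color_vector[v * col + u][0] = px[2]; // R (channel 2 of BGR)
            color_vector[v * col + u][1] = px[1]; // G
            color_vector[v * col + u][2] = px[0]; // B (channel 0 of BGR)
        }
    }
    return color_vector;
}
```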

What needs to be noted here:

The rows of the Mat class correspond to the height of the IplImage structure, and rows/height correspond to point.y. The cols of the Mat class correspond to the width of the IplImage structure, and cols/width correspond to point.x.

In inline cv::Vec3b& cv::Mat::at<cv::Vec3b>(int row, int col), the row index comes first and the column index second. This is not hard to keep straight: OpenCV's image origin is the upper-left corner, and the horizontal axis is still x while the vertical axis is y, so at(row, col) is indexed as (y, x).


The 3 channels of an OpenCV image are stored in BGR order, hence the reversed channel indices when filling color_vector.

4. Get the RGB corresponding to the point cloud

// use the extrinsic and intrinsic matrices to get the corresponding u and v (apply the projection matrix)
void getUV(const cv::Mat &matrix_in, const cv::Mat &matrix_out, float x, float y, float z, float* UV) {
    double matrix3[4][1] = {x, y, z, 1};
    cv::Mat coordinate(4, 1, CV_64F, matrix3);
    
    // calculate the result of u and v
    cv::Mat result = matrix_in*matrix_out*coordinate;
    float u     = result.at<double>(0, 0);
    float v     = result.at<double>(1, 0);
    float depth = result.at<double>(2, 0);
    
    UV[0] = u / depth;
    UV[1] = v / depth;
}

// get RGB value of the lidar point
void getColor(const cv::Mat &matrix_in, const cv::Mat &matrix_out, float x, float y, float z, int row, int col, const vector<vector<int>> &color_vector, int* RGB) {
    float UV[2] = {0, 0}; 
    getUV(matrix_in, matrix_out, x, y, z, UV);  // get U and V from the x,y,z
    
    int u = int(UV[0]);
    int v = int(UV[1]);

    // check u and v against each axis separately: an index-only check
    // (0 <= v*col+u < row*col) would let a point just past the right edge
    // of the image wrap onto the next row
    if (u >= 0 && u < col && v >= 0 && v < row) {
        int32_t index = v*col + u;
        RGB[0] = color_vector[index][0];
        RGB[1] = color_vector[index][1];
        RGB[2] = color_vector[index][2];
    }
}



// set the RGB for the cloud point  
int RGB[3] = {0, 0, 0}; 
getColor(matrix_in, matrix_out, x, y, z, row, col, color_vector, RGB);  // get the RGB of the corresponding pixel
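Putting steps 2 to 4 together, a compact standalone sketch (lookupColor is a hypothetical helper, and the extrinsic transform is taken as identity for brevity):

```cpp
#include <vector>

struct RGB8 { int r, g, b; };

// Project a lidar point with the intrinsic matrix K (row-major 3x3),
// truncate to an integer pixel, and fetch its RGB if it lies in the image.
// Assumes the camera and lidar frames coincide (identity extrinsics).
bool lookupColor(const double K[9], double x, double y, double z,
                 const std::vector<RGB8> &pixels, int row, int col, RGB8 &out) {
    double u = (K[0]*x + K[2]*z) / z;   // fx*x/z + cx
    double v = (K[4]*y + K[5]*z) / z;   // fy*y/z + cy
    int ui = int(u), vi = int(v);
    if (ui < 0 || ui >= col || vi < 0 || vi >= row) return false;
    out = pixels[vi*col + ui];          // row-major lookup, as in step 3
    return true;
}
```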

5. Play data

3. Summary

The visualization of pixels and point clouds mainly uses the projection matrix to obtain the corresponding relationship between point clouds and pixels.

s \begin{bmatrix} u\\v\\1 \end{bmatrix} = \begin{bmatrix} f_x&0&c_x\\0&f_y&c_y\\0&0&1 \end{bmatrix} \left( \begin{bmatrix} R&t\\0&1 \end{bmatrix} \begin{bmatrix} P\\1 \end{bmatrix} \right)_{1:3}

s \begin{bmatrix} u\\v\\1 \end{bmatrix} = \begin{bmatrix} f_x&0&c_x\\0&f_y&c_y\\0&0&1 \end{bmatrix} \begin{bmatrix} X'\\Y'\\Z' \end{bmatrix}

With s = Z', this gives

\begin{bmatrix} u\\v\\1 \end{bmatrix} = \begin{bmatrix} f_x&0&c_x\\0&f_y&c_y\\0&0&1 \end{bmatrix} \begin{bmatrix} \frac{X'}{Z'} \\ \frac{Y'}{Z'} \\ 1 \end{bmatrix}
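As a concrete instance of the model, take f_x = f_y = 100, c_x = 320, c_y = 240, identity extrinsics, and a lidar point P = (1, 2, 10):

s = Z' = 10, \qquad u = f_x \frac{X'}{Z'} + c_x = 100 \cdot \frac{1}{10} + 320 = 330, \qquad v = f_y \frac{Y'}{Z'} + c_y = 100 \cdot \frac{2}{10} + 240 = 260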


Origin blog.csdn.net/qq_38650944/article/details/124133378