Notes on livox_lidar_camera_calibration -- Verifying the Calibration Extrinsics

The open-source code is at: GitHub - Shelfcol/livox_camera_lidar_calibration_modified: a modified version of livox_camera_lidar_calibration

1. Projecting the point cloud onto the image

roslaunch camera_lidar_calibration projectCloud.launch

1. Load the corresponding image and bag file, the camera intrinsics and distortion coefficients, and the extrinsics.

2. Use the projection model to compute each lidar point's projected pixel coordinates on the image plane:

x = lidar_datas[i].points[j].x;
y = lidar_datas[i].points[j].y;
z = lidar_datas[i].points[j].z;

getTheoreticalUV(theoryUV, intrinsic, extrinsic, x, y, z);
int u = floor(theoryUV[0] + 0.5); // round to the nearest pixel
int v = floor(theoryUV[1] + 0.5);
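getTheoreticalUV itself is not shown in the snippet above. A minimal sketch of what it computes, assuming intrinsic is a 3x3 CV_64F cv::Mat and extrinsic a 4x4 CV_64F cv::Mat (the function name comes from the snippet; the body below is an assumed implementation, not necessarily the repo's exact code):

void getTheoreticalUV(float* theoryUV, const cv::Mat& intrinsic, const cv::Mat& extrinsic,
                      double x, double y, double z) {
    double data[4] = {x, y, z, 1.0};
    cv::Mat P(4, 1, CV_64F, data);                // homogeneous lidar point
    cv::Mat T = extrinsic(cv::Rect(0, 0, 4, 3));  // top 3x4 block of [R t; 0 1]
    cv::Mat uvw = intrinsic * T * P;              // 3x1 vector (s*u, s*v, s)
    theoryUV[0] = uvw.at<double>(0, 0) / uvw.at<double>(2, 0);  // u
    theoryUV[1] = uvw.at<double>(1, 0) / uvw.at<double>(2, 0);  // v
}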

3. Determine the pixel RGB from the lidar point's x value

The main idea is to divide the x range between min_depth and max_depth into 10 bins of width scale: anything nearer than min_depth is pure blue, anything beyond max_depth is magenta (red + blue), and the in-between range is split at bin boundaries 0, 1, 2, 4, 7, 10 (in multiples of scale), with one color channel linearly interpolated inside each interval:

// set the color by distance to the cloud
// note: the bins spanning 2 or 3 scale widths let the interpolated value exceed 255,
// so the & 0xff mask wraps the channel within them
void getColor(int &result_r, int &result_g, int &result_b, float cur_depth) {
    float scale = (max_depth - min_depth)/10;
    if (cur_depth < min_depth) { // nearer than min_depth: pure blue
        result_r = 0;
        result_g = 0;
        result_b = 0xff;
    }
    else if (cur_depth < min_depth + scale) { // blue -> cyan: ramp green up
        result_r = 0;
        result_g = int((cur_depth - min_depth) / scale * 255) & 0xff;
        result_b = 0xff;
    }
    else if (cur_depth < min_depth + scale*2) { // cyan -> green: ramp blue down
        result_r = 0;
        result_g = 0xff;
        result_b = (0xff - int((cur_depth - min_depth - scale) / scale * 255)) & 0xff;
    }
    else if (cur_depth < min_depth + scale*4) { // green -> yellow: ramp red up
        result_r = int((cur_depth - min_depth - scale*2) / scale * 255) & 0xff;
        result_g = 0xff;
        result_b = 0;
    }
    else if (cur_depth < min_depth + scale*7) { // yellow -> red: ramp green down
        result_r = 0xff;
        result_g = (0xff - int((cur_depth - min_depth - scale*4) / scale * 255)) & 0xff;
        result_b = 0;
    }
    else if (cur_depth < min_depth + scale*10) { // red -> magenta: ramp blue up
        result_r = 0xff;
        result_g = 0;
        result_b = int((cur_depth - min_depth - scale*7) / scale * 255) & 0xff;
    }
    else { // beyond max_depth: magenta
        result_r = 0xff;
        result_g = 0;
        result_b = 0xff;
    }
}



int r, g, b;
getColor(r, g, b, x); // determine RGB from the x value
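As a worked example (assuming, purely for illustration, min_depth = 2.5 and max_depth = 50, so scale = 4.75): a point at x = 10 satisfies min_depth + scale <= 10 < min_depth + 2*scale, so it falls into the cyan-to-green bin and gets r = 0, g = 0xff, b = 0xff - int((10 - 2.5 - 4.75) / 4.75 * 255) = 255 - 147 = 108.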

4. Final result image

2. Coloring the point cloud with the corresponding pixel colors

roslaunch camera_lidar_calibration colorLidar.launch

1. Load a corresponding image and bag file, the camera intrinsic matrix K and distortion coefficients, and the extrinsics.

2. Undistort the image

// use the intrinsic matrix and distortion coefficients to undistort the photo first
cv::Mat map1, map2;
cv::Size imageSize = src_img.size();
cv::initUndistortRectifyMap(K, distortion_coef, cv::Mat(),
                            cv::getOptimalNewCameraMatrix(K, distortion_coef, imageSize, 1, imageSize, 0),
                            imageSize, CV_16SC2, map1, map2);
cv::remap(src_img, src_img, map1, map2, cv::INTER_LINEAR);  // correct the distortion
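K and distortion_coef above are plain cv::Mats loaded from the calibration output; a minimal sketch of their shape (the numeric values below are placeholders, not real calibration results):

// placeholder values -- substitute the intrinsics from your own calibration
cv::Mat K = (cv::Mat_<double>(3, 3) <<
    1000.0, 0.0,    640.0,   // fx, 0,  cx
    0.0,    1000.0, 360.0,   // 0,  fy, cy
    0.0,    0.0,    1.0);
cv::Mat distortion_coef = (cv::Mat_<double>(1, 5) <<
    0.1, -0.2, 0.001, 0.001, 0.0);  // k1, k2, p1, p2, k3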

3. Read the RGB of every pixel in the image

int row = src_img.rows;
int col = src_img.cols;
// allocate one 3-element RGB entry per pixel
vector<vector<int>> color_vector;
color_vector.resize(row*col);
for (unsigned int i = 0; i < color_vector.size(); ++i) {
    color_vector[i].resize(3);
}

// read the photo and copy all RGB information into color_vector
ROS_INFO("Start to read the photo ");
for (int v = 0; v < row; ++v) {
    for (int u = 0; u < col; ++u) {
        // for a .bmp photo, the 3 channels are stored as BGR
        color_vector[v*col + u][0] = src_img.at<cv::Vec3b>(v, u)[2];
        color_vector[v*col + u][1] = src_img.at<cv::Vec3b>(v, u)[1];
        color_vector[v*col + u][2] = src_img.at<cv::Vec3b>(v, u)[0];
    }
}

Note the following:

The Mat class's rows correspond to the IplImage struct's height; rows/height correspond to point.y.
The Mat class's cols correspond to the IplImage struct's width; cols/width correspond to point.x.

inline cv::Vec3b &cv::Mat::at<cv::Vec3b>(int row, int col) takes row first, col second.
This is easy to keep straight: OpenCV's image origin is the top-left corner, but the horizontal axis is still x and the vertical axis is y.

Also, OpenCV stores an image's 3 channels in BGR order, which is why the channel indices are reversed ([2], [1], [0]) when filling color_vector in the loop above.
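A quick way to remember the indexing convention: cv::Mat::at also has an overload taking a cv::Point, which uses (x, y) order, so the two lines below read the same pixel:

cv::Vec3b a = src_img.at<cv::Vec3b>(v, u);             // (row, col) order
cv::Vec3b b = src_img.at<cv::Vec3b>(cv::Point(u, v));  // (x, y) order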

4. Fetch the RGB for each lidar point

// use the extrinsic and intrinsic matrices to get the corresponding u and v (projection)
void getUV(const cv::Mat &matrix_in, const cv::Mat &matrix_out, float x, float y, float z, float* UV) {
    double matrix3[4][1] = {x, y, z, 1};
    cv::Mat coordinate(4, 1, CV_64F, matrix3);
    
    // matrix_in*matrix_out is the 3x4 projection matrix; the product is (s*u, s*v, s)
    cv::Mat result = matrix_in*matrix_out*coordinate;
    float u     = result.at<double>(0, 0);
    float v     = result.at<double>(1, 0);
    float depth = result.at<double>(2, 0);
    
    UV[0] = u / depth;
    UV[1] = v / depth;
}

// get the RGB value of the lidar point
void getColor(const cv::Mat &matrix_in, const cv::Mat &matrix_out, float x, float y, float z, int row, int col, const vector<vector<int>> &color_vector, int* RGB) {
    float UV[2] = {0, 0}; 
    getUV(matrix_in, matrix_out, x, y, z, UV);  // get u and v from x, y, z
    
    int u = int(UV[0]);
    int v = int(UV[1]);

    // check u and v separately: a bare index check (0 <= index < row*col) would let an
    // out-of-image u wrap onto the next image row
    if (u >= 0 && u < col && v >= 0 && v < row) {
        int32_t index = v*col + u;
        RGB[0] = color_vector[index][0];
        RGB[1] = color_vector[index][1];
        RGB[2] = color_vector[index][2];
    }
}



// set the RGB for the cloud point
int RGB[3] = {0, 0, 0}; 
getColor(matrix_in, matrix_out, x, y, z, row, col, color_vector, RGB);  // fetch the RGB of the corresponding pixel
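Downstream, the RGB triple is typically packed into a colored PCL point; a minimal sketch, assuming the enclosing loop provides x, y, z and an output cloud of type pcl::PointCloud<pcl::PointXYZRGB>::Ptr (the variable names here are illustrative, not the repo's exact ones):

#include <pcl/point_types.h>

pcl::PointXYZRGB pt;
pt.x = x; pt.y = y; pt.z = z;
pt.r = RGB[0];  // copy the pixel color fetched above
pt.g = RGB[1];
pt.b = RGB[2];
cloud->points.push_back(pt);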

5. Play back the data
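This step is an ordinary bag replay, along the lines of (the bag name is a placeholder):

rosbag play your_data.bag

with a visualizer such as RViz subscribed to the published colored cloud.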

3. Summary

Both visualizations come down to the same thing: using the projection matrix to establish the correspondence between point-cloud points and pixels. In getUV above, depth is exactly the scale factor s = Z' in the equations below.

s \begin{bmatrix} u\\v\\1 \end{bmatrix} = \begin{bmatrix} f_x&0&c_x\\0&f_y&c_y\\0&0&1 \end{bmatrix} \left( \begin{bmatrix} R&t\\0&1 \end{bmatrix} \begin{bmatrix} P\\1 \end{bmatrix} \right)_{1:3}

s \begin{bmatrix} u\\v\\1 \end{bmatrix} = \begin{bmatrix} f_x&0&c_x\\0&f_y&c_y\\0&0&1 \end{bmatrix} \begin{bmatrix} X'\\Y'\\Z' \end{bmatrix}

With s = Z':

\begin{bmatrix} u\\v\\1 \end{bmatrix} = \begin{bmatrix} f_x&0&c_x\\0&f_y&c_y\\0&0&1 \end{bmatrix} \begin{bmatrix} \frac{X'}{Z'}\\ \frac{Y'}{Z'}\\ 1 \end{bmatrix}
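A quick self-contained check of the pinhole formula (all numbers are made-up placeholders; the extrinsic transform is taken as identity, so the point is already in the camera frame):

#include <opencv2/core.hpp>
#include <cstdio>

int main() {
    // toy intrinsics: fx = fy = 1000, cx = 640, cy = 360
    cv::Mat K = (cv::Mat_<double>(3, 3) << 1000, 0, 640, 0, 1000, 360, 0, 0, 1);
    double X = 1.0, Y = -0.5, Z = 5.0;  // camera-frame point, so s = Z' = 5
    double u = K.at<double>(0, 0) * X / Z + K.at<double>(0, 2);  // u = fx*X'/Z' + cx
    double v = K.at<double>(1, 1) * Y / Z + K.at<double>(1, 2);  // v = fy*Y'/Z' + cy
    std::printf("u = %.1f, v = %.1f\n", u, v);  // prints u = 840.0, v = 260.0
    return 0;
}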


Reposted from blog.csdn.net/qq_38650944/article/details/124133378