Calibrating camera-lidar extrinsic parameters with Autoware
One, compile and install Autoware
I installed it following the official documentation.
Two, calibrate the camera intrinsic parameters
1. Calibrate the intrinsics with any tool
No matter what tool or method you use for intrinsic calibration, just copy the intrinsic matrix and distortion coefficients into the Autoware intrinsics file (don't forget to update the image size in the file). The Autoware intrinsics file looks like this; file name: autoware_camera_calibration.yaml:
```yaml
%YAML:1.0
---
CameraExtrinsicMat: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [ 1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 0.,
       1. ]
CameraMat: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 1.0358646240234375e+03, 0., 6.3327093505859375e+02, 0.,
       1.0368857421875000e+03, 3.6562481689453125e+02, 0., 0., 1. ]
DistCoeff: !!opencv-matrix
   rows: 1
   cols: 5
   dt: d
   data: [ 2.4196624755859378e-02, -3.8360595703125000e-02,
       -1.6517639160156200e-03, -6.3323974609375000e-04, 0. ]
ImageSize: [ 1280, 720 ]
ReprojectionError: 0.0
DistModel: plumb_bob
```
CameraMat is the intrinsic matrix, DistCoeff holds the distortion coefficients, and CameraExtrinsicMat is the extrinsic matrix, initialized to identity (when the extrinsics have not been calibrated yet, just leave the default value).
2. Use Autoware to calibrate the intrinsics
```sh
rosrun autoware_camera_lidar_calibrator cameracalibrator.py --square SQUARE_SIZE --size MxN image:=/image_topic
```
Parameter list:

| parameter | description |
|---|---|
| SQUARE_SIZE | edge length of one checkerboard square, in meters, e.g. 0.025 |
| MxN | number of inner corners of the checkerboard; a board of 12×9 squares has 11×8 inner corners, so pass 11x8 |
| image | image topic; change it to the topic published by your own camera |
The image topic can either be captured live from the camera or played back from a pre-recorded bag file. Workflow after starting the calibrator:
1. Move the checkerboard around the camera's field of view until the X, Y, and Scale bars in the interface turn green
2. Once step 1 is done, the CALIBRATE button turns green; click it
3. Press the SAVE button; the calibration file is saved in the home directory as YYYYmmdd_HHMM_autoware_camera_calibration.yaml
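To illustrate the MxN corner convention above, here is a sketch (assuming an OpenCV-style calibration pipeline, not Autoware's actual script) of how the corner count and square size define the 3D object points the calibrator matches against detected corners:

```python
import numpy as np

# Sketch of how the --size MxN argument maps to calibration object points.
# A board of 12x9 squares has 11x8 inner corners, so M, N = 11, 8 here.
M, N = 11, 8
square_size = 0.025  # --square: edge length of one square, in meters

# One 3D object point per inner corner, on the z=0 board plane, in meters.
objp = np.zeros((M * N, 3), dtype=np.float64)
objp[:, :2] = np.mgrid[0:M, 0:N].T.reshape(-1, 2) * square_size

print(objp.shape)  # (88, 3): 11*8 corners, one 3D point each
```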
Three, calibrate the camera extrinsic parameters
```sh
roslaunch autoware_camera_lidar_calibrator camera_lidar_calibration.launch intrinsics_file:=/PATH/TO/YYYYmmdd_HHMM_autoware_camera_calibration.yaml image_src:=/image_topic
```
Parameter list:

| parameter | description |
|---|---|
| intrinsics_file | path to the calibrated intrinsics file |
| image_src | image topic; change it to the topic published by your own camera |
After a successful startup, an image viewer pops up showing the camera topic. Then open rviz in another terminal and subscribe to the point cloud.
Calibration procedure:
1. Observe the image and the point cloud at the same time
2. Find a pixel in the image whose corresponding 3D point is identifiable in the point cloud (you will need 9 such point pairs)
3. Click that pixel in the image viewer
4. Use the Publish Point tool in rviz to click the corresponding 3D point
5. Repeat until all 9 point pairs have been clicked. The resulting calibration file is saved in the home directory as YYYYmmdd_HHMM_autoware_lidar_camera_calibration.yaml
The content of my calibrated file is as follows:
```yaml
%YAML:1.0
---
CameraExtrinsicMat: !!opencv-matrix
   rows: 4
   cols: 4
   dt: d
   data: [ -7.7790843100493945e-02, -2.4332492096283465e-02,
       -9.9667272188923595e-01, 2.9736582189798355e-02,
       9.9696831005122966e-01, -3.5685379128176375e-03,
       -7.7726792618500185e-02, 7.8517058864235878e-03,
       -1.6653778276735642e-03, -9.9969755194536147e-01,
       2.4536323097440582e-02, -2.1072627604007721e-01, 0., 0., 0., 1. ]
CameraMat: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 1.0368479264581563e+03, 0., 6.3824154867849927e+02, 0.,
       1.0365568752556060e+03, 3.6724821987437957e+02, 0., 0., 1. ]
DistCoeff: !!opencv-matrix
   rows: 1
   cols: 5
   dt: d
   data: [ 1.6957648272811977e-02, 1.8810925993432863e-02,
       -7.4924769244694373e-04, -2.0282223380795470e-04,
       -7.6387620890063776e-02 ]
ImageSize: [ 1280, 720 ]
ReprojectionError: 0
DistModel: plumb_bob
```
Note that this extrinsic is camera_to_lidar, i.e., the transform from the camera to the lidar, not from the lidar to the camera. If you need the lidar-to-camera extrinsic, invert the extrinsic matrix.
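A 4×4 rigid transform can be inverted in closed form rather than with a general matrix inverse, since inv([R t; 0 1]) = [Rᵀ −Rᵀt; 0 1]. A sketch using the CameraExtrinsicMat above (values truncated for brevity):

```python
import numpy as np

# camera_to_lidar extrinsic, truncated from the calibrated file above.
T_cam_to_lidar = np.array([
    [-7.7790843100e-02, -2.4332492096e-02, -9.9667272189e-01,  2.9736582190e-02],
    [ 9.9696831005e-01, -3.5685379128e-03, -7.7726792619e-02,  7.8517058864e-03],
    [-1.6653778277e-03, -9.9969755195e-01,  2.4536323097e-02, -2.1072627604e-01],
    [ 0.0, 0.0, 0.0, 1.0]])

R = T_cam_to_lidar[:3, :3]
t = T_cam_to_lidar[:3, 3]

# Closed-form inverse of a rigid transform: [R t]^-1 = [R^T  -R^T t].
T_lidar_to_cam = np.eye(4)
T_lidar_to_cam[:3, :3] = R.T
T_lidar_to_cam[:3, 3] = -R.T @ t
```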
Four, test
Fuse the point cloud onto the image, i.e., lidar_to_camera.
git clone this code into your ROS workspace and modify the corresponding files.
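The fusion math itself is short: transform each lidar point into the camera frame with the lidar_to_camera extrinsic, then apply the pinhole projection with the intrinsics. A minimal sketch, assuming hypothetical intrinsics and an identity placeholder extrinsic (so the sample points are effectively already in a camera-style frame with z forward):

```python
import numpy as np

K = np.array([[1036.8, 0.0, 638.2],       # hypothetical intrinsics
              [0.0, 1036.6, 367.2],
              [0.0, 0.0, 1.0]])
T_lidar_to_cam = np.eye(4)                 # placeholder; use your inverted extrinsic

points = np.array([[0.5, 0.2, 5.0],        # sample 3D points, in meters
                   [-1.0, 0.0, 10.0]])

# Homogeneous transform into the camera frame.
pts_h = np.hstack([points, np.ones((len(points), 1))])
pts_cam = (T_lidar_to_cam @ pts_h.T).T[:, :3]

# Keep points in front of the camera, then apply the pinhole projection.
in_front = pts_cam[:, 2] > 0
uv = (K @ pts_cam[in_front].T).T
uv = uv[:, :2] / uv[:, 2:3]                # pixel coordinates (u, v)
```

Each row of `uv` is then a pixel at which the corresponding lidar point can be drawn, typically color-coded by range.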