Autoware.universe Deployment 05: Real Vehicle Debugging


This article introduces the deployment of Autoware.universe on real vehicles. Other articles in this series:
Autoware.universe deployment 01: Installing Autoware.universe in Ubuntu20.04 and jointly debugging it with Awsim
Autoware.universe deployment 02: Drawing high-precision Lanelet2 map
Autoware.universe deployment 03: Joint debugging with Carla (binary version)
Autoware.universe deployment 04: universe sensor ROS2 driver

1. Mapping

1.1 Point cloud map

Unlike Autoware.AI, Autoware.universe does not officially provide a mapping function; the official documentation recommends using other mapping algorithms instead: Autoware-documentation
Record point cloud bags and build the map with another SLAM algorithm (preferably one that can fuse GNSS, so that the point cloud map is in an absolute coordinate system, which is convenient when using RTK later), such as LIO-SAM
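For reference, a minimal way to collect the mapping data is to record the raw LiDAR topic (plus IMU and GNSS if available) into a bag and feed it to the SLAM algorithm offline; the topic names below are simply the ones used later in this article and should be adapted to your own drivers:

ros2 bag record -o mapping_data /points_raw /sensing/imu/tamagawa/imu_raw /sensing/gnss/ublox/nav_sat_fix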

1.2 High-precision map

Use the method from Autoware.universe Deployment 02: Drawing high-precision Lanelet2 map to draw the high-precision map. Due to limited space, only a one-way lane, a small parking lot and a small parking space were drawn.

2. Parameter configuration

(1) Modify the autoware.launch.xml file (you can also leave it unchanged here and pass the argument externally later).
In line 17 of src/launcher/autoware_launch/autoware_launch/launch/autoware.launch.xml, change launch_sensing_driver from true to false, so that Autoware's built-in sensor drivers are not launched. You can directly copy the following to replace line 17:

<arg name="launch_sensing_driver" default="false" description="launch sensing driver"/>
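If you prefer not to edit the file, the same argument can instead be passed on the command line when launching Autoware later, for example (the map path here is a placeholder):

ros2 launch autoware_launch autoware.launch.xml map_path:=/path/to/your/map vehicle_model:=sample_vehicle sensor_model:=sample_sensor_kit launch_sensing_driver:=false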

(2) Modify the gnss.launch.xml file.
In line 4 of src/sensor_kit/sample_sensor_kit_launch/sample_sensor_kit_launch/launch/gnss.launch.xml, change coordinate_system from 1 to 2. You can directly copy the following to replace line 4:

<arg name="coordinate_system" default="2" description="0:UTM, 1:MGRS, 2:PLANE"/>

Add the following on line 5:

<arg name="plane_zone" default="0"/>

(3) If GNSS is not used, set gnss_enabled to false in /src/universe/autoware.universe/launch/tier4_localization_launch/launch/util/util.launch.xml:

<arg name="gnss_enabled" value="false"/>
<!-- <arg name="gnss_enabled" value="true"/> -->

(4) Modify the preprocessing parameters.
You may need to adjust the point cloud preprocessing parameters, which can be found under autoware_universe/src/universe/autoware.universe/launch/tier4_localization_launch/config, especially the voxel grid filter parameters.
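To see which preprocessing configuration files are available there, you can simply list that directory:

ls autoware_universe/src/universe/autoware.universe/launch/tier4_localization_launch/config/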

3. Sensor data communication interface

According to the official node diagram, you can see the data flow between the nodes: https://autowarefoundation.github.io/autoware-documentation/galactic/design/autoware-architecture/node-diagram/
It can also be seen from the Carla simulation that Carla outputs and receives the following messages:

Among them:

  • /control/command/control_cmd is the message Autoware sends to control the vehicle chassis; /op_ros2_agent converts it into CAN messages that then drive the chassis;
  • /sensing/imu/tamagawa/imu_raw is the IMU message, used for odometry (the gyro odometer); its frame_id is tamagawa/imu_link;
  • /sensing/gnss/ublox/nav_sat_fix is the GNSS message, used for pose initialization (optional); its frame_id is gnss_link;
  • /sensing/camera/traffic_light/image_raw is the image message used to recognize traffic lights; its frame_id is camera4/camera_link;
  • /vehicle/status feeds back the vehicle's speed and other status messages (see the CAN code for details);
  • /Carla_pointcloud is the point cloud output by Carla. The point clouds Autoware takes as input are /sensing/lidar/top/outlier_filtered/pointcloud and /sensing/lidar/concatenated/pointcloud (the frame_id of both is base_link). The /carla_pointcloud_interface node converts /Carla_pointcloud into /sensing/lidar/top/outlier_filtered/pointcloud for localization and /sensing/lidar/concatenated/pointcloud for perception;
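Once the sensor drivers from the following sections are running, a quick way to sanity-check these interfaces is to confirm that each topic is actually being published:

ros2 topic hz /sensing/lidar/top/outlier_filtered/pointcloud
ros2 topic hz /sensing/imu/tamagawa/imu_raw
ros2 topic hz /sensing/camera/traffic_light/image_raw
ros2 topic info /control/command/control_cmd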

3.1 LiDAR point cloud

The following node converts /points_raw (frame_id velodyne, or whatever your LiDAR driver outputs) into the base_link frame (this requires the LiDAR extrinsic parameters), and then publishes /sensing/lidar/top/outlier_filtered/pointcloud and /sensing/lidar/concatenated/pointcloud:

#include <cmath>

#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/point_cloud2.hpp>
#include <geometry_msgs/msg/transform_stamped.hpp>
#include <tf2/LinearMath/Quaternion.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl_ros/transforms.hpp>

class LidarTransformNode : public rclcpp::Node
{
public:
    // Constructor
    LidarTransformNode() : Node("points_raw_transform_node")
    {
        // Read the LiDAR-to-base_link extrinsic parameters
        transform_x = this->declare_parameter("transform_x", 0.0);
        transform_y = this->declare_parameter("transform_y", 0.0);
        transform_z = this->declare_parameter("transform_z", 0.0);
        transform_roll = this->declare_parameter("transform_roll", 0.0);
        transform_pitch = this->declare_parameter("transform_pitch", 0.0);
        transform_yaw = this->declare_parameter("transform_yaw", 0.0);
        // Points closer to the sensor than this radius are discarded
        RadiusOutlierFilter = this->declare_parameter("RadiusOutlierFilter", 0.0);

        std::cout << "velodyne to base_link:" << std::endl
                  << "  transform_x:    " << transform_x << std::endl
                  << "  transform_y:    " << transform_y << std::endl
                  << "  transform_z:    " << transform_z << std::endl
                  << "  transform_roll: " << transform_roll << std::endl
                  << "  transform_pitch:" << transform_pitch << std::endl
                  << "  transform_yaw:  " << transform_yaw << std::endl;

        // Initialize translation
        transform_stamp.transform.translation.x = transform_x;
        transform_stamp.transform.translation.y = transform_y;
        transform_stamp.transform.translation.z = transform_z;
        // Initialize rotation (quaternion)
        tf2::Quaternion quaternion;
        quaternion.setRPY(transform_roll, transform_pitch, transform_yaw);
        transform_stamp.transform.rotation.x = quaternion.x();
        transform_stamp.transform.rotation.y = quaternion.y();
        transform_stamp.transform.rotation.z = quaternion.z();
        transform_stamp.transform.rotation.w = quaternion.w();

        // Create the publishers for the topics Autoware consumes
        publisher_1 = this->create_publisher<sensor_msgs::msg::PointCloud2>(
            "/sensing/lidar/top/outlier_filtered/pointcloud",
            10);
        publisher_2 = this->create_publisher<sensor_msgs::msg::PointCloud2>(
            "/sensing/lidar/concatenated/pointcloud",
            10);

        // Subscribe to the raw point cloud from the LiDAR driver
        subscription_ = this->create_subscription<sensor_msgs::msg::PointCloud2>(
            "/points_raw",
            10,
            std::bind(&LidarTransformNode::pointCloudCallback, this, std::placeholders::_1));
    }

private:
    void pointCloudCallback(const sensor_msgs::msg::PointCloud2::SharedPtr msg)
    {
        // Filter out points that are too close to the sensor (and NaN points)
        pcl::PointCloud<pcl::PointXYZI>::Ptr xyz_cloud(new pcl::PointCloud<pcl::PointXYZI>);
        pcl::PointCloud<pcl::PointXYZI>::Ptr pcl_output(new pcl::PointCloud<pcl::PointXYZI>);
        pcl::fromROSMsg(*msg, *xyz_cloud);
        for (size_t i = 0; i < xyz_cloud->points.size(); ++i)
        {
          if (sqrt(xyz_cloud->points[i].x * xyz_cloud->points[i].x + xyz_cloud->points[i].y * xyz_cloud->points[i].y +
                   xyz_cloud->points[i].z * xyz_cloud->points[i].z) >= RadiusOutlierFilter && !std::isnan(xyz_cloud->points[i].z))
          {
            pcl_output->points.push_back(xyz_cloud->points.at(i));
          }
        }
        sensor_msgs::msg::PointCloud2 output;
        pcl::toROSMsg(*pcl_output, output);
        output.header = msg->header;
    
        // Transform the cloud into the base_link frame using the static extrinsic
        sensor_msgs::msg::PointCloud2 transformed_cloud;
        pcl_ros::transformPointCloud("base_link", transform_stamp, output, transformed_cloud);
    
        // Publish the transformed cloud on both topics
        publisher_1->publish(transformed_cloud);
        publisher_2->publish(transformed_cloud);
    }
  
    rclcpp::Subscription<sensor_msgs::msg::PointCloud2>::SharedPtr subscription_;
    rclcpp::Publisher<sensor_msgs::msg::PointCloud2>::SharedPtr publisher_1, publisher_2;

    double transform_x, transform_y, transform_z, transform_roll, transform_pitch, transform_yaw;
    double RadiusOutlierFilter;
    geometry_msgs::msg::TransformStamped transform_stamp;
};

int main(int argc, char **argv)
{
    // Initialize ROS 2
    rclcpp::init(argc, argv);

    // Create and spin the node
    auto node = std::make_shared<LidarTransformNode>();
    rclcpp::spin(node);
    rclcpp::shutdown();

    return 0;
}
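To test the node on its own, its parameters can be passed directly on the command line. The package name below matches the one used later in this article, while the executable name is only an assumed example:

ros2 run points_raw_transform points_raw_transform_node --ros-args -p transform_x:=0.75 -p transform_y:=0.35 -p transform_z:=0.6 -p transform_roll:=0.0 -p transform_pitch:=0.0 -p transform_yaw:=0.0 -p RadiusOutlierFilter:=2.5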

3.2 Image

Autoware receives /sensing/camera/traffic_light/image_raw image messages for traffic light recognition (fusion perception can use up to 7 other cameras, but that is optional, and images are not used for fusion perception here), and the expected frame_id is camera4/camera_link, so modify the frame_id in the usb_cam parameter file:

/**:
    ros__parameters:
      # device mount point
      video_device: "/dev/video2"
      # frame rate
      framerate: 30.0
      io_method: "mmap"
      # coordinate frame
      frame_id: "camera4/camera_link"
      # pixel format
      pixel_format: "mjpeg2rgb"  # see usb_cam/supported_formats for list of supported formats
      # image size
      image_width: 1920
      image_height: 1080
      # camera name
      camera_name: "JR_HF868"
      # calibration parameters
      camera_info_url: "package://usb_cam/config/camera_info.yaml"
      # brightness
      brightness: 50
      # contrast
      contrast: 50
      # saturation
      saturation: 40
      # sharpness
      sharpness: 70
      # gain
      gain: -1
      # white balance
      auto_white_balance: true
      white_balance: 4000
      # exposure
      autoexposure: true
      exposure: 100
      # focus
      autofocus: false
      focus: -1
Modify the publishing topic in the node code:

  UsbCamNode::UsbCamNode(const rclcpp::NodeOptions &node_options)
      : Node("usb_cam", node_options),
        m_camera(new usb_cam::UsbCam()),
        m_image_msg(new sensor_msgs::msg::Image()),
        m_image_publisher(std::make_shared<image_transport::CameraPublisher>(
            image_transport::create_camera_publisher(this, "/sensing/camera/traffic_light/image_raw",
                                                     rclcpp::QoS{100}.get_rmw_qos_profile()))),
        m_parameters(),
        m_camera_info_msg(new sensor_msgs::msg::CameraInfo()),
        m_service_capture(
            this->create_service<std_srvs::srv::SetBool>(
                "set_capture",
                std::bind(
                    &UsbCamNode::service_capture,
                    this,
                    std::placeholders::_1,
                    std::placeholders::_2,
                    std::placeholders::_3)))
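After rebuilding usb_cam with these changes, you can check that the image really arrives on the topic Autoware expects, and view it, for example:

ros2 topic hz /sensing/camera/traffic_light/image_raw
ros2 run rqt_image_view rqt_image_view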

3.3 IMU

In the IMU driver, modify the published topic name to /sensing/imu/tamagawa/imu_raw and the frame_id to tamagawa/imu_link, and modify its launch file accordingly.
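With the IMU driver running, a quick check that the topic name and frame_id are correct:

ros2 topic hz /sensing/imu/tamagawa/imu_raw
ros2 topic echo /sensing/imu/tamagawa/imu_raw   # header.frame_id should be tamagawa/imu_link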

3.4 GNSS RTK

4. Real vehicle debugging

4.1 Write a startup launch file

Put the previously written sensor driver packages into the Autoware workspace, and write a unified launch file that starts them all together with Autoware:

<launch>
    <!-- LiDAR driver -->
    <include file="$(find-pkg-share rslidar_sdk)/launch/start.py" />
    <!-- Convert the raw cloud into what Autoware needs -->
    <include file="$(find-pkg-share points_raw_transform)/launch/points_raw_transform.launch.xml" >
      <!-- LiDAR to base_link extrinsics -->
      <arg name="transform_x" value="0.75"/>
      <arg name="transform_y" value="0.35"/>
      <arg name="transform_z" value="0.6"/>
      <arg name="transform_roll" value="0.0"/>
      <arg name="transform_pitch" value="0.0"/>
      <arg name="transform_yaw" value="0.0"/>
      <arg name="RadiusOutlierFilter" value="2.5"/>
    </include>

    <!-- Camera driver -->
    <include file="$(find-pkg-share usb_cam)/launch/JR_HF868.launch.py" />

    <!-- Generate fake GNSS and IMU data for debugging -->
    <!-- <include file="$(find-pkg-share gnss_imu_sim)/launch/both.launch.py" /> -->

    <!-- GNSS module driver (optional, not used here) -->
    <!-- <include file="$(find-pkg-share gnss_imu_sim)/launch/M620.launch.py" /> -->
    <!-- IMU module driver -->
    <include file="$(find-pkg-share wit_ros2_imu)/rviz_and_imu.launch.py" />

    <!-- CAN driver -->
    <include file="$(find-pkg-share can_ros2_bridge)/can.launch.py" />

    <!-- Launch Autoware -->
    <include file="$(find-pkg-share autoware_launch)/launch/autoware.launch.xml">
      <!-- Map path -->
      <arg name="map_path" value="/home/car/Autoware/QinHang"/>
      <!-- Vehicle and sensor models -->
      <arg name="vehicle_model" value="sample_vehicle"/>
      <arg name="sensor_model" value="sample_sensor_kit"/>
      <!-- Do not use simulated time -->
      <arg name="use_sim_time" value="false"/>
      <!-- Do not launch Autoware's built-in sensor drivers -->
      <arg name="launch_sensing_driver" value="false"/>
    </include>
</launch>

After the above sensors are working, a simple test can be run to check localization and traffic light recognition:

# Build the added packages
colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-select rslidar_msg rslidar_sdk points_raw_transform usb_cam wit_ros2_imu can_ros2_bridge gnss_imu_sim sensor_driver
source install/setup.bash
ros2 launch sensor_driver senser_driver.launch.xml


4.2 Modify sensor external parameters

Install the sensors on the vehicle, then modify their extrinsic parameters (mainly the IMU and the camera used to recognize traffic lights; the LiDAR has already been converted into the base_link frame by the conversion node above). The extrinsics need to be calibrated (you can also just measure them with a ruler, but that is not accurate); for the calibration method, see my article: Calibration summary of each SLAM sensor: Camera/IMU/LiDAR.
src/sensor_kit/sample_sensor_kit_launch/sample_sensor_kit_description/config/sensor_kit_calibration.yaml:

sensor_kit_base_link:
  camera0/camera_link:
    x: 0.10731
    y: 0.56343
    z: -0.27697
    roll: -0.025
    pitch: 0.315
    yaw: 1.035
  camera1/camera_link:
    x: -0.10731
    y: -0.56343
    z: -0.27697
    roll: -0.025
    pitch: 0.32
    yaw: -2.12
  camera2/camera_link:
    x: 0.10731
    y: -0.56343
    z: -0.27697
    roll: -0.00
    pitch: 0.335
    yaw: -1.04
  camera3/camera_link:
    x: -0.10731
    y: 0.56343
    z: -0.27697
    roll: 0.0
    pitch: 0.325
    yaw: 2.0943951
  camera4/camera_link:
    x: 0.07356
    y: 0.0
    z: -0.0525
    roll: 0.0
    pitch: -0.03
    yaw: -0.005
  camera5/camera_link:
    x: -0.07356
    y: 0.0
    z: -0.0525
    roll: 0.0
    pitch: -0.01
    yaw: 3.125
  traffic_light_right_camera/camera_link:
    x: 0.75
    y: 0.13
    z: 0.60
    roll: 1.57
    pitch: 0.0
    yaw: 1.57
  traffic_light_left_camera/camera_link:
    x: 0.57
    y: 0.56
    z: 0.60
    roll: 1.57
    pitch: 0.0
    yaw: 1.57
  velodyne_top_base_link:
    x: 0.0
    y: 0.0
    z: 0.0
    roll: 0.0
    pitch: 0.0
    yaw: 1.575
  velodyne_left_base_link:
    x: 0.0
    y: 0.56362
    z: -0.30555
    roll: -0.02
    pitch: 0.71
    yaw: 1.575
  velodyne_right_base_link:
    x: 0.0
    y: -0.56362
    z: -0.30555
    roll: -0.01
    pitch: 0.71
    yaw: -1.580
  gnss_link:
    x: -0.1
    y: 0.0
    z: -0.2
    roll: 0.0
    pitch: 0.0
    yaw: 0.0
  tamagawa/imu_link:
    x: 0.75
    y: 0.35
    z: 0.60
    roll: 0.0
    pitch: 0.0
    yaw: 0.0
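After rebuilding (the colcon command in section 4.3 below) and launching Autoware, the new extrinsics can be verified in the TF tree, since the static transforms are published by the Autoware launch:

ros2 run tf2_ros tf2_echo base_link tamagawa/imu_link
ros2 run tf2_ros tf2_echo base_link camera4/camera_link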

4.3 Modify body parameters

To adapt to our vehicle, the body parameters need to be modified (the following parameters can be measured yourself) in src/vehicle/sample_vehicle_launch/sample_vehicle_description/config/vehicle_info.param.yaml:

/**:
  ros__parameters:
    wheel_radius: 0.18 # wheel radius, primarily used for dead reckoning
    wheel_width: 0.12 # lateral tire width, primarily used for dead reckoning
    wheel_base: 1.03 # distance between the front and rear wheel centers (wheelbase)
    wheel_tread: 0.7 # distance between the left and right wheel centers (track width)
    front_overhang: 0.18 # distance between the front wheel center and the vehicle front
    rear_overhang: 0.22 # distance between the rear wheel center and the vehicle rear
    left_overhang: 0.0 # distance between the left wheel center and the vehicle's left side
    right_overhang: 0.0 # distance between the right wheel center and the vehicle's right side
    vehicle_height: 1.0  # vehicle height
    max_steer_angle: 0.45 # [rad] maximum steering angle

Recompile after modifying the above content:

# Update the sensor extrinsics and the vehicle body parameters (wheel tread, etc.)
colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release --packages-select sample_sensor_kit_description sample_vehicle_description vehicle_info_util

You can use the following command to view the minimum turning radius of the car calculated according to the above parameters:

ros2 run vehicle_info_util min_turning_radius_calculator.py
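As a rough cross-check of the script's output: with a simple bicycle model, the minimum turning radius at the rear axle is wheel_base / tan(max_steer_angle) = 1.03 / tan(0.45 rad) ≈ 2.1 m; the script may additionally account for the vehicle footprint (overhangs and tread), so its value can differ slightly.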


4.4 Real vehicle debugging

Start up:

source install/setup.bash
ros2 launch sensor_driver senser_driver.launch.xml

The computer's performance is too poor when it is not plugged in, which made it unable to run the full stack... To be continued

Source: blog.csdn.net/zardforever123/article/details/132538871