Robot SLAM and autonomous navigation (two)-prerequisites

Overview

The SLAM and autonomous navigation packages in ROS can be applied to a wide range of mobile robot platforms, but to achieve the best results the robot's hardware must meet the following three requirements.
1) The navigation packages work best on differential-drive and holonomic wheeled robots, and they assume the robot can be controlled directly with velocity commands (see the sketch after this list). Each command specifies:

  • linear: the robot's linear velocity along the x, y, and z axes, in m/s.
  • angular: the robot's angular velocity about the x, y, and z axes, in rad/s.

2) The navigation packages require the robot to carry a ranging device such as a lidar, so that depth information about the environment can be obtained.
3) The navigation packages were developed with square and circular robots as templates. Robots of other shapes can still use them, but the results may not be as good.
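The two fields above belong to the geometry_msgs/Twist message. As a minimal sketch of how such velocity commands are issued, the node below publishes a constant Twist; the /cmd_vel topic name is the common convention, but the actual topic depends on the robot's driver.

#!/usr/bin/env python
# Minimal sketch: drive a mobile base with velocity commands.
# Assumes the base driver subscribes to /cmd_vel (a common convention).
import rospy
from geometry_msgs.msg import Twist

def main():
    rospy.init_node('simple_velocity_commander')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rate = rospy.Rate(10)  # publish at 10 Hz

    cmd = Twist()
    cmd.linear.x = 0.2    # move forward at 0.2 m/s
    cmd.angular.z = 0.5   # turn at 0.5 rad/s

    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    main()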

1. Sensor information

1. Depth information of the environment
Whether for SLAM or for autonomous navigation, obtaining depth information about the surrounding environment is essential. Before we can obtain it, we first need to understand how depth information is represented in ROS.
For lidar, ROS defines a dedicated data structure in the sensor_msgs package, LaserScan, for storing laser data. The LaserScan message is defined as follows.
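For reference, this is the standard sensor_msgs/LaserScan message definition:

std_msgs/Header header        # timestamp and frame of the scan
float32 angle_min             # start angle of the scan [rad]
float32 angle_max             # end angle of the scan [rad]
float32 angle_increment       # angular distance between measurements [rad]
float32 time_increment        # time between measurements [s]
float32 scan_time             # time between scans [s]
float32 range_min             # minimum range value [m]
float32 range_max             # maximum range value [m]
float32[] ranges              # range data [m]
float32[] intensities         # intensity data (device-specific)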

  • angle_min: the starting angle of the detectable range
  • angle_max: the ending angle of the detectable range
  • angle_increment: the angular step between adjacent measurements
  • time_increment: the time step between adjacent measurements, used for compensation when the sensor is in relative motion
  • scan_time: the time required to collect one frame (one full scan) of data
  • range_min: the minimum detectable depth
  • range_max: the maximum detectable depth
  • ranges: the array storing one frame of depth data (see the example below)
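As a minimal sketch of how these fields fit together, the node below subscribes to a scan and converts each valid range reading into a Cartesian point in the laser frame (the /scan topic name is the common default and is assumed here):

#!/usr/bin/env python
# Minimal sketch: convert LaserScan ranges into Cartesian points.
# Assumes the laser publishes on /scan (the usual default topic).
import math
import rospy
from sensor_msgs.msg import LaserScan

def scan_callback(scan):
    points = []
    for i, r in enumerate(scan.ranges):
        # Discard invalid readings and readings outside the sensor's range.
        if math.isnan(r) or r < scan.range_min or r > scan.range_max:
            continue
        # The i-th reading lies at angle_min + i * angle_increment.
        angle = scan.angle_min + i * scan.angle_increment
        points.append((r * math.cos(angle), r * math.sin(angle)))
    rospy.loginfo('%d valid points in this frame', len(points))

if __name__ == '__main__':
    rospy.init_node('scan_to_points')
    rospy.Subscriber('/scan', LaserScan, scan_callback)
    rospy.spin()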
If the robot has no lidar but is equipped with an RGB-D camera such as a Kinect, the depth information of the surrounding environment can also be obtained through its infrared camera. However, the raw depth information produced by an RGB-D camera is a 3D point cloud, while many ROS packages expect 2D laser data as input. Can the 3D data be converted into 2D data?
The way to reduce the three-dimensional data to two dimensions is to discard most of it: extract only a single row of the depth image and repackage it as a LaserScan message, which yields the required two-dimensional lidar information. Although a great deal of valid data is lost, this is exactly enough for 2D SLAM.
ROS provides a corresponding package, depthimage_to_laserscan, which converts 3D point-cloud depth data into 2D lidar data.
<!-- depthimage_to_laserscan node: converts point-cloud depth data into laser scan data -->
<node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan" name="depthimage_to_laserscan" output="screen">
    <remap from="image" to="/kinect/depth/image_raw"/>
    <remap from="camera_info" to="/kinect/depth/camera_info"/>
    <remap from="scan" to="/scan"/>
    <param name="output_frame_id" value="/camera_link"/>
</node>
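With this node running, any package that expects sensor_msgs/LaserScan data on /scan (gmapping, for example) can consume the Kinect's depth data as if it came from a 2D lidar. The image and camera_info remappings must match the topics actually published by the camera driver.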

2. Odometry information
Odometry estimates the change in the robot's pose over time from sensor data. On robot platforms, the most common odometry sensor is the encoder, for example a rotary encoder mounted on a drive wheel. While the robot moves, the wheel's rotation can be measured by the rotary encoder; if the circumference of the wheel is known, the robot's speed and distance travelled per unit time can be calculated. Odometry obtains position by integrating velocity over time. This method is very sensitive to error, so measures such as careful data acquisition, device calibration, and data filtering are essential.
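As a concrete illustration of this velocity-and-time integration, the sketch below accumulates a differential-drive robot's pose from wheel encoder increments; the encoder resolution and wheel geometry are assumed values, for illustration only.

#!/usr/bin/env python
# Minimal dead-reckoning sketch for a differential-drive robot.
# Wheel parameters and encoder resolution below are illustrative values.
import math

TICKS_PER_REV = 4096        # encoder ticks per wheel revolution (assumed)
WHEEL_RADIUS = 0.05         # wheel radius in meters (assumed)
WHEEL_SEPARATION = 0.30     # distance between the two wheels in meters (assumed)

x, y, theta = 0.0, 0.0, 0.0  # accumulated pose estimate

def update_pose(d_ticks_left, d_ticks_right):
    """Integrate one encoder update into the pose estimate."""
    global x, y, theta
    # Convert tick increments to distance travelled by each wheel.
    d_left = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    # Distance travelled by the robot center and change of heading.
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_SEPARATION
    # Integrate with a small-step (midpoint) approximation.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta

Because every step adds a small uncorrected error, the resulting estimate drifts over time, which is exactly why the calibration and filtering measures mentioned above matter.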
The navigation packages require the robot to publish nav_msgs/Odometry messages. A nav_msgs/Odometry message contains the robot's estimated position and velocity in free space.

  • pose: the robot's current pose, including its position along the x, y, and z axes and its orientation, together with a covariance matrix characterizing the uncertainty of the estimate
  • twist: the robot's current motion state, including its linear and angular velocities on the x, y, and z axes, together with a covariance matrix characterizing the uncertainty of the estimate

Note: all coordinate systems in ROS are right-handed.
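The standard nav_msgs/Odometry message definition is:

std_msgs/Header header                    # timestamp and reference frame of the pose
string child_frame_id                     # frame of the twist (typically the base frame)
geometry_msgs/PoseWithCovariance pose     # estimated pose plus 6x6 covariance
geometry_msgs/TwistWithCovariance twist   # estimated velocity plus 6x6 covariance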
In addition to the key velocity and position information, the data structure also contains the covariance matrices used by filtering algorithms. In a robot system with low accuracy requirements, the default covariance values can be used; in a system with high accuracy requirements, the robot must first be modeled precisely, and the specific values of the matrices then determined through simulation, experiments, and so on.
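A minimal sketch of filling in and publishing such a message is shown below; the odom and base_link frame names follow the common ROS convention, and the diagonal covariance values are placeholders that a real system would determine by calibration.

#!/usr/bin/env python
# Minimal sketch: publish a nav_msgs/Odometry message.
# Frame names follow the usual convention; covariances are placeholder values.
import rospy
from nav_msgs.msg import Odometry
from tf.transformations import quaternion_from_euler

rospy.init_node('odom_publisher')
pub = rospy.Publisher('/odom', Odometry, queue_size=10)
rospy.sleep(0.5)  # give subscribers time to connect

odom = Odometry()
odom.header.stamp = rospy.Time.now()
odom.header.frame_id = 'odom'        # pose is expressed in the odom frame
odom.child_frame_id = 'base_link'    # twist is expressed in the base frame

# Pose estimate (x, y, yaw), e.g. from dead reckoning.
odom.pose.pose.position.x = 1.0
odom.pose.pose.position.y = 0.5
q = quaternion_from_euler(0, 0, 0.3)  # yaw = 0.3 rad
odom.pose.pose.orientation.x = q[0]
odom.pose.pose.orientation.y = q[1]
odom.pose.pose.orientation.z = q[2]
odom.pose.pose.orientation.w = q[3]

# Velocity estimate.
odom.twist.twist.linear.x = 0.2
odom.twist.twist.angular.z = 0.1

# Placeholder diagonal covariances (6x6 matrices stored row-major).
odom.pose.covariance = [0.01 if i % 7 == 0 else 0.0 for i in range(36)]
odom.twist.covariance = [0.01 if i % 7 == 0 else 0.0 for i in range(36)]

pub.publish(odom)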

2. Simulation platform

1. Create simulation environment
2. Load robot
3. Real robot
