Practice of Autonomous Navigation Car (2)

In this part, we first introduce the software framework of the autonomous navigation car, and then the configuration process of each module. (This chapter mainly analyzes the algorithm framework of the autonomous navigation car, so there is quite a lot of text. I hope everyone can read it patiently!)

2.1 Overall analysis

The realization of an autonomous navigation car is generally divided into three parts: mapping, path planning, and control. Mapping refers to building a 2D occupancy grid map or a 3D point cloud map from lidar or vision camera data. The mapping algorithms based on a 2D lidar generally include Gmapping, Hector, and Cartographer; 3D lidar mapping generally uses Google's Cartographer algorithm. These algorithms have all been integrated into ROS as function packages. Path planning computes a path with the least cost in an existing map (this path can also be drawn by you), and control drives the car along the preset trajectory to achieve trajectory tracking.

ROS already provides its own set of autonomous navigation packages (and this classic diagram that has remained unchanged for ages). The picture given by ROS is very classic and basically contains the whole framework of the autonomous navigation car, so let's analyze this graph following its functional blocks.
[Figure: ROS navigation stack diagram]

1. Assume that we have already obtained a map through gmapping or another algorithm; take a 2D map as an example (usually two files are produced, xxx.pgm and xxx.yaml). Next, we use this scanned map for autonomous navigation.

2. We use the map_server package to load the existing map and convert it into a map usable for navigation (specifically, a costmap). The costmap provides the traversal cost of each point in the map; in other words, the costmap is built on top of the static map published by the map_server node. There are two 2D, three-layer costmaps, namely global_costmap and local_costmap, and the three layers are the static layer (the static map), the obstacle layer (dynamic obstacles), and the inflation layer (expansion around obstacles).
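As an illustration of these three layers, a costmap configuration in ROS typically declares them as costmap_2d plugins. The sketch below is only an example: the file name costmap_common_params.yaml and all values are assumptions for illustration, not the exact configuration of this car.

# Hypothetical costmap_common_params.yaml, loaded into both global_costmap and local_costmap
plugins:
  - {name: static_layer,    type: "costmap_2d::StaticLayer"}     # static map published by map_server
  - {name: obstacle_layer,  type: "costmap_2d::ObstacleLayer"}   # obstacles detected by the lidar
  - {name: inflation_layer, type: "costmap_2d::InflationLayer"}  # inflates cost around obstacles

obstacle_layer:
  observation_sources: laser_scan
  laser_scan: {sensor_frame: laser, data_type: LaserScan, topic: scan, marking: true, clearing: true}

inflation_layer:
  inflation_radius: 0.35   # example value, in meters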

3. The second element is localization. A car based on laser navigation has two localization data sources: the local wheel odometry and the AMCL package. AMCL (Adaptive Monte Carlo Localization) matches the data obtained by the current lidar scan against the existing map to locate the car on the map. The output of the AMCL package is a TF transformation (rotation and translation) between the "map" and "odom" frames. Note: in autonomous navigation, the position relationships between sensors and the position of the car on the map are all expressed as TF transformations. For example, the wheel odometry outputs the transformation between "odom" and "base_link", and the mounting position of the lidar relative to the chassis is expressed by the TF between "base_link" and "laser". In this way a TF tree is formed, and the transformation (i.e. rotation and translation) between any two frames on the tree can be found through the TF tree.
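To inspect this TF tree in practice, the standard tf command-line tools can be used, for example:

rosrun tf view_frames                 # generates frames.pdf, a diagram of the whole TF tree
rosrun tf tf_echo map odom            # prints the transform published by AMCL
rosrun tf tf_echo base_link laser     # prints the lidar mounting transform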

4. Once the map needed for navigation and the current position of the robot are known, we can start planning the route. The move_base package consists of a global planner and a local planner. The global planner plans a global path on the global_costmap (built from the map provided by map_server; the map type is still a costmap); the algorithms used here are usually the A* and D* algorithms. The local planner plans dynamically on the local_costmap to realize obstacle avoidance, detours, and similar functions; the algorithms involved include the artificial potential field method and the DWA algorithm.
move_base thus contains both the local planner and the global planner. This package takes as input the current velocity information of the robot (the transformation between "odom" and "base_link" output by the odometry) and the position of the robot in the map (the TF transformation between "map" and "odom" output by the AMCL module), and outputs linear velocity and turning-rate commands for the chassis on the topic "/cmd_vel". Finally, the robot chassis node receives the speed and angle commands from this topic to achieve closed-loop control of the chassis.
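For reference, a move_base launch file typically looks roughly like the following. This is only a sketch: the yaml file names under p3at_navigation/config are assumed here for illustration and are not the configuration used later in this series.

<node pkg="move_base" type="move_base" name="move_base" output="screen">
  <!-- common costmap settings (layers, sensor sources) loaded into both costmaps -->
  <rosparam file="$(find p3at_navigation)/config/costmap_common_params.yaml" command="load" ns="global_costmap"/>
  <rosparam file="$(find p3at_navigation)/config/costmap_common_params.yaml" command="load" ns="local_costmap"/>
  <rosparam file="$(find p3at_navigation)/config/global_costmap_params.yaml" command="load"/>
  <rosparam file="$(find p3at_navigation)/config/local_costmap_params.yaml" command="load"/>
  <!-- local planner (e.g. DWA) parameters -->
  <rosparam file="$(find p3at_navigation)/config/base_local_planner_params.yaml" command="load"/>
</node>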

[Figure: overall algorithm structure of the autonomous navigation car]
In summary, the algorithm structure of the autonomous navigation car is shown in the figure above. Mapping, localization, path planning, and control are the four basic elements of an autonomous navigation vehicle, and motion planning is mainly provided by the move_base package. At this point, we have basically figured out the algorithm framework of the autonomous navigation car and its components. Next, we will configure each module step by step to realize the car's mapping and path planning.

2.2 Driver package configuration

First, we configure the ROS driver packages for the sensors on the car's industrial computer. The car used in our experiment carries two additional sensors: a Slamtec (RPLIDAR) 2D lidar and a Beitong gamepad. In addition, the chassis of the Pioneer robot also needs a ROS driver package, so we need to configure three driver packages in total. First, we create a "pioneer3at_pkg" folder in the workspace to hold all the ROS packages and configuration files needed for navigation.
[Figure: pioneer3at_pkg folder in the workspace]
1. The lidar and gamepad packages are relatively simple: just clone them from GitHub and compile them directly.
Lidar:

cd ~/catkin_ws/src/pioneer3at_pkg

git clone https://github.com/ncnynl/rplidar_ros.git   # lidar driver package
cd ~/catkin_ws/src/pioneer3at_pkg/rplidar_ros/scripts  # copy the udev rules file
sudo cp rplidar.rules /etc/udev/rules.d/

cd ~/catkin_ws/
catkin_make
source ~/catkin_ws/devel/setup.bash

Gamepad:

sudo apt-get install ros-kinetic-joystick-*

2. For the chassis driver package of the Pioneer robot, please refer to sections 2.1-2.8 of my previous blog post.
3. Since the gamepad's message type is not the same as the type accepted on the chassis control node's topic (/cmd_vel), it cannot be forwarded to the chassis topic directly. So we write the following node (named "transfor_joy") to forward the gamepad data to the robot chassis, which lets us control the Pioneer chassis through the gamepad. (The full code is also in the GitHub repository linked at the end of this post.) The source file is as follows:

#include <ros/ros.h> 
#include <geometry_msgs/Twist.h>
#include <sensor_msgs/Joy.h> 

using namespace std;

ros::Publisher vel_pub_;
ros::Subscriber joy_sub_; 
  
double l_scale_=0.5, a_scale_=0.5; 


// Convert gamepad input (sensor_msgs/Joy) into velocity commands (geometry_msgs/Twist)
void joyCallback(const sensor_msgs::Joy::ConstPtr& joy) 
{ 
	geometry_msgs::Twist twist;  
	if(joy->buttons[2])              // button 2 acts as an enable (dead-man) switch
	{
		if(joy->buttons[0])          // button 0: half-speed scale
		{
			l_scale_=0.5;
			a_scale_=0.5;
		}
		else if(joy->buttons[1])     // button 1: full-speed scale
		{
			l_scale_=1;
			a_scale_=1;
		}
		twist.angular.z = a_scale_*joy->axes[0];  // stick horizontal axis -> turning rate
		twist.linear.x = l_scale_*joy->axes[1];   // stick vertical axis -> forward speed
	}
	else                             // enable button released: command zero velocity
	{
		twist.angular.z = 0; 
		twist.linear.x = 0; 
	} 

	vel_pub_.publish(twist);
	//cout<<"I received the data: "<<twist.linear.x <<"  "<<twist.angular.z <<endl;

}

int main(int argc, char** argv) 
{ 
 	ros::init(argc, argv, "transfor_joy");
 	ros::NodeHandle nh_; 
	vel_pub_ = nh_.advertise<geometry_msgs::Twist>("cmd_vel", 1); 
	joy_sub_ = nh_.subscribe<sensor_msgs::Joy>("joy", 10, &joyCallback); 

	ros::spin();
}
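To compile this node with catkin, the package's CMakeLists.txt needs entries roughly like the following (a sketch assuming the package is also called transfor_joy and the source file is saved as src/transfor_joy.cpp); the package.xml must likewise declare dependencies on roscpp, geometry_msgs, and sensor_msgs.

cmake_minimum_required(VERSION 2.8.3)
project(transfor_joy)

find_package(catkin REQUIRED COMPONENTS roscpp geometry_msgs sensor_msgs)
catkin_package()

include_directories(${catkin_INCLUDE_DIRS})

# build the joystick-forwarding node
add_executable(transfor_joy src/transfor_joy.cpp)
target_link_libraries(transfor_joy ${catkin_LIBRARIES})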

At the same time, create the launch file "transfor_joy.launch":

<launch>
  <node pkg="transfor_joy" type="transfor_joy" name="transfor_joy">

  <!-- remap from="/cmd_vel" to="/turtle1/cmd_vel"/-->
  </node>

  <!-- Joy node -->
  <node pkg="joy" type="joy_node" name="joy_node">

    <param name="dev" value="/dev/input/js0" type="string "/>
    <param name="deadzone" value="0.05" type="double"/>    
    <param name="autorepeat_rate" value="10" type="double"/> 

  </node>
 </launch>

4. After compilation succeeds, we modify the launch file in the p3at_bringup package so that it starts the chassis driver, lidar, gamepad, and gamepad data-forwarding node at the same time. (Of course, you can also write a new launch file yourself.)

<?xml version="1.0" encoding="UTF-8"?>
<launch>
  <!-- Load URDF file -->
  <include file="$(find p3at_description)/launch/state_publisher.launch" />

  <!-- Load robot driver -->
  <node name="RosAria" pkg="rosaria" type="RosAria">
    <param name="port" value="/dev/pioneer3at" />
    <remap from="/RosAria/pose" to="/odom"/>
    <remap from="/RosAria/cmd_vel" to="/cmd_vel"/>
  </node>

   <include file="$(find rplidar_ros)/launch/rplidar_a3.launch" />
  
  <include file="$(find transfor_joy)/launch/transfor_joy.launch" />
  
</launch>

Finally execute in the workspace:

source ~/catkin_ws/devel/setup.bash
roslaunch p3at_bringup p3at_bringup.launch

At this point, you can control the robot chassis with the gamepad. You can also start RViz and subscribe to the lidar data topic to check whether the lidar output is normal. If you use another robot chassis or a self-made chassis, just replace the chassis package we configured with your own chassis driver package.
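For example, assuming the lidar driver publishes on the default /scan topic, you can quickly verify it from a terminal before opening RViz:

rostopic hz /scan         # should report the lidar's publishing rate
rostopic echo -n 1 /scan  # print a single scan message
rosrun rviz rviz          # add a LaserScan display on /scan to see the points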

So far we can read all the sensor data on the car. Next we configure the mapping and navigation packages; in this section, we use the gmapping algorithm to scan a map of the space.

2.3 Mapping package configuration

Gmapping involves four coordinate frames when it runs: "odom", "base_link", "laser", and "map". The TF transform between "odom" and "base_link" is generally provided by the chassis and represents the odometry motion; this TF is published when we start the chassis node (p3at_bringup).
The relationship between "base_link" and "laser" is the positional relationship between the center of the chassis and the lidar, which is determined by our mounting position. We need to publish this TF ourselves, or set it in the robot description file. In this way, the transform between any two of "odom", "base_link", and "laser" can be obtained through the TF tree, while the relationship between "map" and "odom" is output by gmapping.
Here we mentioned a robot description file. Generally speaking, the standard experimental platforms you can buy provide a ROS package named "xxx_description", which contains a 3D model of the robot in URDF format. (You can find the model package corresponding to your robot online; if the robot is designed by yourself, you need to write this file yourself.)
[Figure: contents of the robot description package]
We can launch this package (its state_publisher.launch file) to view the robot's 3D model.
[Figure: the robot's 3D model displayed in RViz]
Then modify the transform between the lidar and the center of the chassis in the description file (i.e. the TF it provides between "base_link" and "laser").
[Figure: setting the base_link to laser transform in the description file]
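If you prefer not to edit the description file, a static_transform_publisher node can publish the same TF; the offsets below are placeholders to be replaced with the actual mounting position of your lidar:

<!-- args are: x y z yaw pitch roll parent_frame child_frame period_in_ms -->
<node pkg="tf" type="static_transform_publisher" name="base_link_to_laser"
      args="0.10 0 0.25 0 0 0 base_link laser 100"/>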
After the coordinate frame relationships are configured, we next configure the operating parameters of gmapping (the parameters I set myself are shown below). Create a new folder "p3at_navigation", create a folder named "launch" inside it, and write gmapping.launch there.

<?xml version="1.0"?>
<launch>

  <arg name="scan_topic" default="scan" />

  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">

    <param name="odom_frame" value="odom"/>
    <param name="base_frame" value="base_link"/>
    <param name="map_frame" value="map"/>

    <!-- Process 1 out of every this many scans (set it to a higher number to skip more scans)  -->
    <param name="throttle_scans" value="1"/>

    <param name="map_update_interval" value="5.0"/> <!-- default: 5.0 -->

    <!-- The maximum usable range of the laser. A beam is cropped to this value.  -->
    <param name="maxUrange" value="5.0"/>

    <!-- The maximum range of the sensor. If regions with no obstacles within the range of the sensor should appear as free space in the map, set maxUrange < maximum range of the real sensor <= maxRange -->
    <param name="maxRange" value="10.0"/>

    <param name="sigma" value="0.05"/>
    <param name="kernelSize" value="1"/>
    <param name="lstep" value="0.05"/>
    <param name="astep" value="0.05"/>
    <param name="iterations" value="5"/>
    <param name="lsigma" value="0.075"/>
    <param name="ogain" value="3.0"/>
    <param name="minimumScore" value="0.0"/>
    <!-- Number of beams to skip in each scan. -->
    <param name="lskip" value="0"/>

    <param name="srr" value="0.01"/>
    <param name="srt" value="0.02"/>
    <param name="str" value="0.01"/>
    <param name="stt" value="0.02"/>

    <!-- Process a scan each time the robot translates this far  -->
    <param name="linearUpdate" value="0.1"/>

    <!-- Process a scan each time the robot rotates this far  -->
    <param name="angularUpdate" value="0.05"/>

    <param name="temporalUpdate" value="-1.0"/>
    <param name="resampleThreshold" value="0.5"/>

    <!-- Number of particles in the filter. default 30        -->
    <param name="particles" value="10"/>

<!-- Initial map size  -->
    <param name="xmin" value="-10.0"/>
    <param name="ymin" value="-10.0"/>
    <param name="xmax" value="10.0"/>
    <param name="ymax" value="10.0"/>

    <!-- Processing parameters (resolution of the map)  -->
    <param name="delta" value="0.02"/>

    <param name="llsamplerange" value="0.01"/>
    <param name="llsamplestep" value="0.01"/>
    <param name="lasamplerange" value="0.005"/>
    <param name="lasamplestep" value="0.005"/>

    <remap from="scan" to="$(arg scan_topic)"/>
  </node>
</launch>
 

Next, we start the corresponding nodes to scan the map:
First, we have to start the lidar, joystick, and chassis nodes. Since we modified the launch file of p3at_bringup in step 4 of section 2.2, it already starts these three drivers at the same time, so here we directly use the p3at_bringup launch to start all three nodes:

source ~/catkin_ws/devel/setup.bash
roslaunch p3at_bringup p3at_bringup.launch

Open a new terminal and start the robot description, which provides the TF between "base_link" and "laser" and lets the robot model be displayed in RViz:

source ~/catkin_ws/devel/setup.bash
roslaunch p3at_description state_publisher.launch

Finally, open a new terminal and start the gmapping node

source ~/catkin_ws/devel/setup.bash
roslaunch p3at_navigation gmapping.launch

At this point, you can drive the robot around with the joystick while the lidar collects data and scans the map of the space.
[Figure: map being built, shown in RViz]
After scanning, run the map_saver node from the map_server package to save the map; you end up with two files (xxx.pgm and xxx.yaml):

rosrun map_server map_saver -f mymap 

The .pgm file is the map image itself, and the .yaml file contains the map parameters such as resolution and origin.
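For reference, the generated yaml file typically looks like this (the values below are illustrative; the resolution corresponds to the delta parameter set in gmapping.launch):

image: mymap.pgm
resolution: 0.020000                          # meters per pixel
origin: [-10.000000, -10.000000, 0.000000]    # pose of the lower-left corner [x, y, yaw]
negate: 0
occupied_thresh: 0.65                         # cells darker than this are considered occupied
free_thresh: 0.196                            # cells lighter than this are considered free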

Note: we have not configured move_base here, so for now we can only remote-control the car to drive around and scan the map. After you read the subsequent tutorials, you can run the move_base package together with the gmapping package; then you can set a goal point through rviz and the robot will drive there by itself, planning its path and building the map at the same time.

Feel free to leave a message to discuss, or see the code on GitHub: https://github.com/RuPingCen/P3AT_Auto_Navigation

Previous: Practice of Autonomous Navigation Car (1) Next: Practice of Autonomous Navigation Car (3)


Origin: https://blog.csdn.net/crp997576280/article/details/99701706