Reading notes (book 6 of the ROS series): "ROS By Example v1: A Do-It-Yourself Guide to the Robot Operating System"

This book is a classic for learning ROS, but I don't think it should be the first book a beginner reads. As a technical novice, I was in a fog when I first read it, and even now I feel the same when reading the v2 version.

1. The opening chapter of this book answers a key question: Where to start learning ROS?

Stage 1: Learn basic concepts and programming skills

For Stage 1, refer to the ROS Wiki's installation instructions and beginner tutorials.

You also need to master tf coordinate transforms to understand how ROS handles different reference frames.

Ask questions on ROS Answers when you get stuck.

Stage 2: Using ROS to control the robot

In Stage 2, follow this book and use ROS to make a robot complete specific tasks. The code in the book can be applied to real-world robots: driving a mobile base, rotating a pan-and-tilt camera, detecting faces, and so on. By the end of the book, the robot you control will be able to navigate autonomously around your home or office, respond to your spoken commands, and combine vision and motion control to track faces or follow people around the house.

2. Selection of operating system and ROS version 

Ubuntu Linux is the operating system officially recommended for ROS by the Open Source Robotics Foundation; Ubuntu is also free and easy to install.

One option is to install Ubuntu alongside Windows in a dual-boot setup and switch operating systems when needed.

It's better to install Ubuntu on an actual computer so it runs faster.

Installing Ubuntu in a virtual machine such as VMware works, but it is not recommended: the virtual machine may bog down or crash when running graphics-intensive programs such as RViz.

3. Use Linux

Search for "Ubuntu tutorial" and learn to use the Ubuntu terminal.

4. ROS concept review

The core entities in ROS are called nodes. Nodes are usually small programs written in Python or C++ that perform some relatively simple tasks or processes. Nodes can be started and stopped independently of each other, and they communicate by passing messages. A node can publish messages on a specific topic or provide services to other nodes.

For example, a publisher node might report data from sensors connected to the robot's microcontroller. A message with a value of 0.5 on the /head_sonar topic means that the sensor is currently detecting an object 0.5 meters away. (ROS uses meters to measure distances and radians to measure angles.) Any node that wants to know the value of this sensor only needs to subscribe to the /head_sonar topic. To use these values, the subscriber node defines a callback function that is executed whenever a new message arrives on the subscribed topic. How often this happens depends on the rate at which the publisher node updates its messages.

A node can also define one or more services. When a request is sent from another node, the ROS service takes some action or sends back a reply. A simple example is a service that turns an LED on or off. A more complex example is a service that returns a navigation plan for a mobile robot given a target position and the robot's starting pose.

Higher-level ROS nodes subscribe to many topics and services, combine the results in useful ways, and possibly publish messages or provide services of their own. For example, the object tracker node we will develop later in the book subscribes to camera messages on a video topic and publishes movement commands on another topic, which are read by the robot's base controller to move the robot in the appropriate direction.

5. ROS publish/subscribe architecture

When using ROS, the first step is to partition the desired behavior into independent functions that can be handled by separate nodes. For example, if your robot uses a webcam or a depth camera such as a Kinect or Xtion Pro, one node will connect to the camera and simply publish the image and/or depth data so that other nodes can use it. If your robot uses a mobile base, a base controller node will listen for motion commands on a topic and drive the robot's motors accordingly. These nodes can be reused without modification in many different applications, as long as the desired behavior requires vision and/or motion control.
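The node partitioning above relies on ROS's publish/subscribe callbacks. The following is a minimal sketch of that pattern in plain Python, purely for illustration; it is NOT the rospy API (in real ROS, the master brokers connections between separately running node processes):

```python
# Toy message bus illustrating the publish/subscribe callback pattern
# that ROS topics implement. Names here are invented for illustration.

class TopicBus:
    """Maps topic names to lists of subscriber callbacks."""
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Every callback registered on this topic fires once per message.
        for callback in self._subscribers.get(topic, []):
            callback(message)

bus = TopicBus()
readings = []

# A "sensor node" publishes range readings; a "consumer node" subscribes.
bus.subscribe("/head_sonar", lambda msg: readings.append(msg))
bus.publish("/head_sonar", 0.5)    # object detected 0.5 m away
bus.publish("/head_sonar", 0.45)

print(readings)  # [0.5, 0.45]
```

The key property, as in ROS, is that the publisher knows nothing about its subscribers: the camera node or base controller can be reused in any application that speaks its topics.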

6. Launch files

To run the application, we use a ROS launch file to launch the entire collection of nodes as a group. Remember that launch files can also contain other launch files to make it easier to reuse existing code in new applications.
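As a sketch, a launch file might look like the following. The package and node names here are hypothetical (except usb_cam, a commonly used camera package), chosen only to show the structure:

```xml
<!-- Hypothetical launch file: starts a group of nodes together and
     reuses another launch file via <include>. -->
<launch>
  <!-- Reuse an existing launch file from another (made-up) package -->
  <include file="$(find my_robot_bringup)/launch/base_controller.launch" />

  <!-- Start a camera node and a face-tracking node as a group -->
  <node pkg="usb_cam" type="usb_cam_node" name="camera" />
  <node pkg="my_vision" type="face_tracker.py" name="face_tracker" output="screen" />
</launch>
```

Running `roslaunch` on such a file starts every listed node; stopping it (Ctrl-C) shuts them all down as a group.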

7. Units and coordinate systems

Before we can send movement commands to our robot, we need to look at the measurement units and coordinate system conventions used in ROS.

When working with reference frames, remember that ROS orients axes using the right-hand convention: the index finger and middle finger point in the positive directions of the x-axis and y-axis, and the thumb points in the positive direction of the z-axis. The direction of rotation about an axis is also given by the right-hand rule: if you point your thumb in the positive direction of an axis, your fingers curl in the direction of positive rotation. For a mobile robot using ROS, the x-axis points forward, the y-axis points to the left, and the z-axis points up. By the right-hand rule, positive rotation of the robot about the z-axis is counterclockwise, and negative rotation is clockwise.
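We can sanity-check the convention with a little plane geometry (plain Python, no ROS needed):

```python
import math

# Right-hand convention check: x forward, y left, z up. A positive
# rotation about z should turn the robot counterclockwise (seen from above).
def rotate_z(x, y, angle):
    """Rotate a point in the ground plane by `angle` radians about +z."""
    return (x * math.cos(angle) - y * math.sin(angle),
            x * math.sin(angle) + y * math.cos(angle))

# A robot facing forward (+x) that rotates +90 degrees ends up facing
# +y, i.e. to its left: counterclockwise, as the right-hand rule says.
fx, fy = rotate_z(1.0, 0.0, math.pi / 2)
print(round(fx, 6), round(fy, 6))  # 0.0 1.0
```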


ROS uses the metric system: linear velocity is always specified in meters per second (m/s) and angular velocity in radians per second (rad/s). A linear speed of 0.5 m/s (about 1.1 mph) is actually quite fast for an indoor robot, while an angular speed of 1.0 rad/s is equivalent to roughly one rotation every 6 seconds, or about 10 RPM. When in doubt, start slowly and increase the speed gradually. For indoor robots, I tend to keep the maximum linear speed at 0.2 m/s or less.
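The figures quoted above are easy to verify with a few lines of arithmetic:

```python
import math

# Sanity-check the speed figures: 0.5 m/s in mph, and what 1.0 rad/s
# means in seconds-per-revolution and RPM.
linear = 0.5                              # m/s
mph = linear * 3600 / 1609.344            # meters/sec -> miles/hour
print(round(mph, 2))                      # 1.12 -- about 1.1 mph

angular = 1.0                             # rad/s
seconds_per_rev = 2 * math.pi / angular   # one full turn is 2*pi radians
rpm = angular * 60 / (2 * math.pi)
print(round(seconds_per_rev, 1), round(rpm, 1))  # 6.3 9.5 -- ~one turn per 6 s, ~10 RPM
```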

8. Levels of motion control

Most differential-drive robots running ROS use encoders on the drive motors or wheels. An encoder registers a certain number of ticks (usually hundreds or even thousands) per revolution of the corresponding wheel. Knowing the diameter of the wheels and the distance between them, encoder ticks can be converted into distance traveled in meters or angle rotated in radians. To compute velocity, these values are simply divided by the time interval between measurements.
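A minimal sketch of that tick-to-odometry conversion follows; the wheel diameter, track width, and ticks-per-revolution values are hypothetical, not taken from any particular robot:

```python
import math

# Convert encoder ticks to distance and rotation for a differential-drive
# robot. All three constants below are made-up example values.
TICKS_PER_REV = 1000        # encoder ticks per wheel revolution
WHEEL_DIAMETER = 0.1        # meters
TRACK_WIDTH = 0.3           # distance between the two wheels, meters

def ticks_to_meters(ticks):
    return ticks / TICKS_PER_REV * math.pi * WHEEL_DIAMETER

def odometry_step(left_ticks, right_ticks, dt):
    """Distance (m), rotation (rad), and their rates over one interval."""
    d_left = ticks_to_meters(left_ticks)
    d_right = ticks_to_meters(right_ticks)
    distance = (d_left + d_right) / 2.0          # midpoint travel
    rotation = (d_right - d_left) / TRACK_WIDTH  # heading change
    return distance, rotation, distance / dt, rotation / dt

# Both wheels advance 500 ticks (half a revolution) in 0.5 s:
# the robot moved straight ahead ~0.157 m at ~0.314 m/s.
print(odometry_step(500, 500, 0.5))
```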

This internal motion data is collectively called odometry, and ROS makes extensive use of it. It helps if your robot has accurate and reliable encoders, but wheel data can be augmented with other sources. For example, the original TurtleBot used a single-axis gyroscope to provide an additional measure of the robot's rotational motion, because the encoders on the iRobot Create were significantly inaccurate during rotation.

No matter how many sources of odometry data are used, the robot's actual position and speed in the world can (and generally will) differ from the values reported by odometry. The degree of difference varies with environmental conditions and the reliability of the odometry sources.
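To see why drift accumulates, here is a dead-reckoning sketch: integrating reported velocities into a pose estimate. This is the estimate odometry gives you; the real robot can wander away from it through wheel slip and sensor error.

```python
import math

# One Euler integration step of planar odometry:
# v is linear velocity (m/s), omega is angular velocity (rad/s).
def integrate(x, y, theta, v, omega, dt):
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Drive "forward at 0.2 m/s" for 5 s in 0.1 s steps: odometry reports
# the robot 1.0 m ahead, whether or not the wheels actually gripped.
pose = (0.0, 0.0, 0.0)
for _ in range(50):
    pose = integrate(*pose, v=0.2, omega=0.0, dt=0.1)
print([round(p, 3) for p in pose])  # [1.0, 0.0, 0.0]
```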

9. Twisting and turning with ROS

ROS uses the Twist message type to issue motion commands to the base controller. The standard topic is /cmd_vel, short for "command velocity". The base controller node subscribes to the /cmd_vel topic and translates Twist messages into the motor signals that actually turn the wheels.

To view the components of a Twist message, run the following command:

$ rosmsg show geometry_msgs/Twist

This will produce the following output:

geometry_msgs/Vector3 linear
  float64 x
  float64 y
  float64 z
geometry_msgs/Vector3 angular
  float64 x
  float64 y
  float64 z

As you can see, the Twist message consists of two sub-messages of type Vector3, one for the x, y and z linear velocity components and the other for the x, y and z angular velocity components. Linear velocity is measured in meters per second and angular velocity is measured in radians per second. (One radian is approximately equal to 57 degrees.)

For a differential-drive robot operating in a two-dimensional plane (such as a floor), we only need the linear x component and the angular z component: such a robot can only move forward or backward along its longitudinal axis and rotate around its vertical axis. In other words, the linear y and z components are always zero (the robot cannot move sideways or vertically), and the angular x and y components are always zero (the robot cannot roll or pitch). An omnidirectional robot would also use the linear y component, while an aerial or underwater robot would use all six components.
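A small helper makes this constraint concrete. Note this is a plain-Python stand-in for illustration, not the real geometry_msgs/Twist class:

```python
# For a differential-drive robot on a flat floor, only linear.x and
# angular.z are meaningful; the other four components stay zero.
def make_twist(linear_x=0.0, angular_z=0.0):
    return {
        "linear":  {"x": linear_x, "y": 0.0, "z": 0.0},
        "angular": {"x": 0.0, "y": 0.0, "z": angular_z},
    }

print(make_twist(0.1))         # drive forward at 0.1 m/s
print(make_twist(0.0, 1.0))    # rotate counterclockwise at 1.0 rad/s
```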

Twist message example

Suppose we want the robot to move straight ahead at a speed of 0.1 meters per second. This requires a Twist message with linear values x=0.1, y=0, z=0 and angular values x=0, y=0, z=0. If you were to specify this Twist message on the command line, the message part would take the following form:

'{linear: {x: 0.1, y: 0, z: 0}, angular: {x: 0, y: 0, z: 0}}'

Notice how curly braces delimit the sub-messages, and a colon plus a space (the space is required!) separates each component name from its value. While this may look like a lot of typing, we rarely control a robot this way; in practice, Twist messages are published by other ROS nodes.

To rotate counterclockwise at an angular velocity of 1.0 radians per second, the required Twist message would be:

'{linear: {x: 0, y: 0, z: 0}, angular: {x: 0, y: 0, z: 1.0}}'

If we combine these two pieces of information, the robot will move forward while turning left. The generated Twist message will be:

'{linear: {x: 0.1, y: 0, z: 0}, angular: {x: 0, y: 0, z: 1.0}}'

The larger the angular z value relative to the linear x value, the tighter the turn.
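The relationship between the two values has a simple geometric reading: the turning radius of the combined command is linear x divided by angular z.

```python
# Turning radius of a combined Twist command: radius = v / omega.
# A larger angular z relative to linear x means a smaller radius,
# i.e. a tighter turn.
def turn_radius(linear_x, angular_z):
    return linear_x / angular_z  # meters (assumes angular_z != 0)

print(turn_radius(0.1, 1.0))  # 0.1 -- a tight circle of 10 cm radius
print(turn_radius(0.1, 0.5))  # 0.2 -- a wider circle of 20 cm radius
```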

10. Using Odometry

When we ask a robot to move or rotate at a certain speed, how do we know it is actually doing what we ask? For example, if we publish a Twist message to make the robot move forward at 0.2 m/s, how do we know that the robot is not actually moving at 0.18 m/s? How do we know both wheels are even traveling at the same speed?

The robot's base controller node uses odometry and PID control to translate motion requests into real-world velocities. The accuracy and reliability of this process depend on the robot's internal sensors, the accuracy of the calibration procedure, and environmental conditions. (For example, some floor surfaces allow slight wheel slip, which disrupts the mapping between encoder counts and distance traveled.)
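The closed-loop idea can be sketched in a few lines. This is a toy simulation, not a real base controller: the gains, time step, and first-order wheel model below are all invented for illustration, and the full PID derivative term is omitted for brevity.

```python
# A PI controller (a simplified PID) drives a simulated wheel toward the
# requested 0.2 m/s using the measured speed as feedback.
def simulate(target=0.2, kp=0.8, ki=3.0, dt=0.05, steps=200):
    speed, integral = 0.0, 0.0
    for _ in range(steps):
        error = target - speed            # feedback: requested minus measured
        integral += error * dt            # accumulated error
        command = kp * error + ki * integral
        # Crude first-order wheel model: actual speed lags the command.
        speed += (command - speed) * 0.5
    return speed

print(round(simulate(), 3))  # 0.2 -- the loop settles on the request
```

Without the feedback loop, any mismatch between commanded and actual wheel speed (friction, battery sag, slip) would go uncorrected; the integral term is what removes the steady-state error.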

The robot's internal odometry can be supplemented with external measurements of the robot's position and/or orientation. For example, wall-mounted visual markers can serve as fiducials, used together with ROS packages such as ar_pose, ar_kinect, or ar_track_alvar to provide fairly accurate localization of the robot within a room.

Similar techniques use visual feature matching without the need for artificial markers (ccny_rgbd_tools, rgbdslam, RTABMap), and another package (laser_scan_matcher) uses laser scan matching. Outdoor robots often use GPS to estimate position, in addition to other forms of odometry.

For the purposes of this book, we will use the term "odometry" to refer to internal position data. However the data is measured, ROS provides a message type to store it: nav_msgs/Odometry.

In abbreviated form, the Odometry message type is defined as follows:

Header header
string child_frame_id
geometry_msgs/PoseWithCovariance pose
geometry_msgs/TwistWithCovariance twist

Here we see that the Odometry message consists of a Header, a string naming the child frame, and two sub-messages: a PoseWithCovariance and a TwistWithCovariance.

To view the fully expanded definition, run the following command:

$ rosmsg show nav_msgs/Odometry

This should produce the following output:

Header header
  uint32 seq
  time stamp
  string frame_id
string child_frame_id
geometry_msgs/PoseWithCovariance pose
  geometry_msgs/Pose pose
    geometry_msgs/Point position
      float64 x
      float64 y
      float64 z
    geometry_msgs/Quaternion orientation
      float64 x
      float64 y
      float64 z
      float64 w
  float64[36] covariance
geometry_msgs/TwistWithCovariance twist
  geometry_msgs/Twist twist
    geometry_msgs/Vector3 linear
      float64 x
      float64 y
      float64 z
    geometry_msgs/Vector3 angular
      float64 x
      float64 y
      float64 z
  float64[36] covariance

The PoseWithCovariance sub-message records the robot's position and orientation, while the TwistWithCovariance component gives us the linear and angular velocities we have already seen. Both the pose and the twist are supplemented with a covariance matrix, which measures the uncertainty in the various measurements.

The Header and child_frame_id define the reference frames used to measure distances and angles. The Header also provides a timestamp for each message, so we know not only where we are but when. By convention, odometry in ROS uses /odom as the parent frame id and /base_link (or /base_footprint) as the child frame id. While /base_link corresponds to a real physical part of the robot, the /odom frame is defined by the translations and rotations encapsulated in the odometry data; these transformations move the robot relative to the /odom frame. If we display the robot model in RViz and set the fixed frame to /odom, the robot's position will reflect where the robot "thinks" it is relative to its starting position.


Origin blog.csdn.net/qq_38250687/article/details/122543533