Robot Learning Project - Project4: Home Service Robot

1 Introduction

The list of steps for the Home Service Robot project is as follows:

2. Environment settings

1) Prerequisites/dependencies
* Gazebo >= 7.0 ROS Kinetic 
* ROS navigation package 

sudo apt-get install ros-kinetic-navigation

* ROS map_server package 

sudo apt-get install ros-kinetic-map-server

* ROS move_base package 

sudo apt-get install ros-kinetic-move-base

* ROS amcl package 

sudo apt-get install ros-kinetic-amcl

(Note:
* make >= 4.1 (Mac, Linux), 3.81 (Windows)
  * Linux: make is installed by default on most Linux distros
  * Mac: [install Xcode command line tools to get make](https://developer.apple.com/xcode/features/)
  * Windows: [Click here for installation instructions](http://gnuwin32.sourceforge.net/packages/make.htm)
* gcc/g++ >= 5.4
  * Linux: gcc/g++ is installed by default on most Linux distros
  * Mac: same deal as make - [install Xcode command line tools](https://developer.apple.com/xcode/features/)
  * Windows: recommend using [MinGW](http://www.mingw.org/))

2) Set up a Catkin workspace for simulation

To program a home service robot, several ROS packages need to be connected together. Some are official ROS packages that provide great tools; others are packages you will create later in the project. The purpose of this section is to prepare and set up a catkin workspace.

Below is a list of the official ROS packages that will need to be obtained, as well as other packages and directories that will need to be created in later stages of the project. The catkin_ws/src directory should look like this:

Official ROS packages
Import these packages and install them into the src directory of the catkin workspace, making sure to clone each full GitHub repository, not just the individual package.

  1. gmapping:  With the gmapping_demo.launch file, you can easily perform SLAM and build a map of the environment with a robot equipped with a laser range finder or an RGB-D camera.
  2. turtlebot_teleop:  With the keyboard_teleop.launch file, you can manually control a robot using keyboard commands.
  3. turtlebot_rviz_launchers:  With the view_navigation.launch file, you can load a preconfigured rviz workspace. Launching this file will save you a lot of time, since it automatically loads the robot model, trajectory, and map.
  4. turtlebot_gazebo:  With turtlebot_world.launch, you can deploy a turtlebot in a gazebo environment by linking the world file to it.

* Open Ubuntu Bash and update the system; execute on the command line

sudo apt-get update && sudo apt-get upgrade -y

* Clone the required repositories; execute on the command line

cd Home-Service-Robot/catkin_ws/src 
git clone https://github.com/ros-perception/slam_gmapping.git 
git clone https://github.com/turtlebot/turtlebot.git 
git clone https://github.com/turtlebot/turtlebot_interactions.git 
git clone https://github.com/turtlebot/turtlebot_simulator.git  

* Build and run the code

Packages and Directories
These packages will be installed and directories will be created throughout the course of the project.

  1. map:  In this directory, store gazebo world files and maps generated by SLAM.
  2. scripts:  In this directory, shell scripts are stored.
  3. rvizConfig:  In this directory, custom rviz configuration files are stored.
  4. pick_objects:  Write a node to command the robot to drive to the pick-up area and unload area.
  5. add_markers:  Write a node to model objects with markers in rviz.

The package structure should look like this:

├──                                # Official ROS packages
    |
    ├── slam_gmapping                  # gmapping_demo.launch file
    │   ├── gmapping
    │   ├── ...
    ├── turtlebot                      # keyboard_teleop.launch file
    │   ├── turtlebot_teleop
    │   ├── ...
    ├── turtlebot_interactions         # view_navigation.launch file
    │   ├── turtlebot_rviz_launchers
    │   ├── ...
    ├── turtlebot_simulator            # turtlebot_world.launch file
    │   ├── turtlebot_gazebo
    │   ├── ...
    ├──                                # Your packages and directories
    |
    ├── map                            # map files
    │   ├── ...
    ├── scripts                        # shell script files
    │   ├── ...
    ├── rvizConfig                     # rviz configuration files
    │   ├── ...
    ├── pick_objects                   # pick_objects C++ node
    │   ├── src/pick_objects.cpp
    │   ├── ...
    └── add_markers                    # add_markers C++ node
        ├── src/add_markers.cpp
        ├── ...

3. Shell script

A shell script is a file containing a series of commands to be executed, and is commonly used to set up an environment, run programs, and so on. In ROS, it is very convenient to start multiple nodes and set parameters from a single roslaunch command. However, when robot software is built from many different packages, it becomes harder to track the errors and bugs produced by the different nodes. This is where shell scripts come in handy: by creating a shell script that starts one or more nodes in separate terminals, you can track the output of each node while keeping the convenience of starting everything with a single command.

launch.sh script

Let's start by creating this launch.sh script in our workspace with the goal of launching Gazebo and Rviz in different terminals. Note that we are using an xterm terminal in the script.

#!/bin/sh
xterm  -e  " gazebo " &
sleep 5
xterm  -e  " source /opt/ros/kinetic/setup.bash; roscore" & 
sleep 5
xterm  -e  " rosrun rviz rviz" 

The launch.sh shell script launches three terminals and issues one or more commands in each terminal. Let's analyze the script to understand what each line means.

Code breakdown

#!/bin/sh

This statement is called a shebang. It must be included in every shell script you write, because it specifies the full path to the UNIX interpreter that executes it.

xterm -e " gazebo " &

The xterm -e statement starts a new xterm terminal instance; in this terminal, we launch gazebo with the gazebo command. The trailing & runs the command in the background, so the script can continue to the next statement and open additional terminals.

sleep 5

 Use sleep to pause the script for 5 seconds.

xterm -e " source /opt/ros/kinetic/setup.bash;  roscore" &

Start a second xterm instance. In this terminal, source the ROS environment and start the ROS master with roscore.

sleep 5

 Pause the script for another 5 seconds.

xterm -e " rosrun rviz rviz"

Start a third instance of xterm terminal, and run rviz.

Save the script file and give it execute permission with chmod +x launch.sh. Then start the shell script with ./launch.sh.

After launching this script, with three xterm terminals open, you will be able to track down any bugs or errors that occur. To recap, this script opens a first terminal and starts gazebo; pauses for 5 seconds and opens a second terminal to start the ROS master; then pauses again for 5 seconds and opens a third terminal to start rviz.

Note: try out the shell script, and customize it as needed in the following steps.

4. SLAM test
The next task of this project is to map the environment previously designed with the Building Editor in Gazebo. But before tackling automatic map building, it is important to test whether SLAM can be performed manually by teleoperating the robot. The goal of this step is to manually test SLAM.

Write a shell script test_slam.sh that deploys a turtlebot in the environment, controls it with keyboard commands, connects it to the SLAM package, and visualizes the map in rviz. We are using a turtlebot for this project, but feel free to use another custom robot.

Example: test_slam.sh

#!/bin/sh

path_catkin_ws="/home/workspace/catkin_ws"

xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && roslaunch my_robotb world.launch world_file:=${path_catkin_ws}/src/map/Robotroom1" &

sleep 10
  
xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && roslaunch my_robotb gmapping_demo.launch" &

sleep 5

xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && roslaunch turtlebot_rviz_launchers view_navigation.launch" &

sleep 5

xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && roslaunch my_robotb teleop.launch" 

SLAM Testing Task List
To manually test SLAM, create the test_slam.sh shell script to launch the files shown in the example above.

Running and Testing
Start the test_slam.sh file, find the xterm terminal running the keyboard_teleop node, and start controlling the robot. You don't need to map the full environment; just make sure everything is working properly. You may notice the low quality of the map, but don't worry about that for now. If all goes well, move on to the next step!

5. Positioning and Navigation Test

The next task of this project is to pick two different goals and test the robot's ability to reach them and orient itself with respect to them. We refer to these goals as the pickup and unloading areas. This section is for testing purposes only, to ensure that the robot can reach these locations before it is autonomously commanded to travel to them.

We will use the ROS navigation stack, which is based on Dijkstra's algorithm (a variant of the uniform-cost search algorithm), to plan the robot's trajectory from the starting position to the goal position. The ROS navigation stack also allows the robot to replan a new trajectory when it encounters obstacles, avoiding anything in its path. You already used this navigation stack when you localized with AMCL and sent the robot a specific goal. If you plan to modify the ROS navigation algorithms, or want to know how they are implemented, check out the official tutorial on writing a global path planner plugin in ROS.

Note: navigation test task list

Write a test_navigation.sh shell script to start these files:

 Example: test_navigation.sh

#!/bin/sh

path_catkin_ws="/home/workspace/catkin_ws"

xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && roslaunch my_robotb world.launch world_file:=${path_catkin_ws}/src/map/Robotroom1" &

sleep 8
  
xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && roslaunch my_robotb amcl.launch" &

sleep 5

xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && roslaunch turtlebot_rviz_launchers view_navigation.launch" 

Test

After starting all the nodes, you will first see the particles around the robot, which means that AMCL has recognized the robot's initial pose. Now, manually send the robot to two different goals, guiding it there and orienting it with respect to each goal.

6. Achieving Multiple Goals

Previously, the robot's ability to reach multiple goals was tested by manually commanding it with the 2D Nav Goal arrow in rviz. Now, write a node that communicates with the ROS navigation stack and automatically sends it successive goals. As mentioned earlier, the ROS navigation stack uses Dijkstra's algorithm (a variant of the uniform-cost search algorithm) to create a path for the robot while avoiding obstacles along the way. In this project, it is used to send the robot to predefined goals. Check out the related tutorial and its documentation.

Here is the C++ code for a node that sends a goal for the robot to reach. Comments have been added to aid understanding:

#include <ros/ros.h>
#include <move_base_msgs/MoveBaseAction.h>
#include <actionlib/client/simple_action_client.h>

// Define a client to send goal requests to the move_base server through a SimpleActionClient
typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> MoveBaseClient;

int main(int argc, char** argv){
  // Initialize the simple_navigation_goals node
  ros::init(argc, argv, "simple_navigation_goals");

  //tell the action client that we want to spin a thread by default
  MoveBaseClient ac("move_base", true);

  // Wait 5 sec for move_base action server to come up
  while(!ac.waitForServer(ros::Duration(5.0))){
    ROS_INFO("Waiting for the move_base action server to come up");
  }

  move_base_msgs::MoveBaseGoal goal;

  // set up the frame parameters
  goal.target_pose.header.frame_id = "base_link";
  goal.target_pose.header.stamp = ros::Time::now();

  // Define a position and orientation for the robot to reach
  goal.target_pose.pose.position.x = 1.0;
  goal.target_pose.pose.orientation.w = 1.0;

   // Send the goal position and orientation for the robot to reach
  ROS_INFO("Sending goal");
  ac.sendGoal(goal);

  // Wait an infinite time for the results
  ac.waitForResult();

  // Check if the robot reached its goal
  if(ac.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
    ROS_INFO("Hooray, the base moved 1 meter forward");
  else
    ROS_INFO("The base failed to move forward 1 meter for some reason");

  return 0;
}

custom code

This code needs to be modified: rename the node to pick_objects, and change frame_id to map, because the fixed frame is map, not base_link. Then modify the code to include an additional goal position and orientation for the robot to reach. The first goal is the pickup area and the second is the unloading area. The robot must move to the pickup area, display a message that it has reached its destination, wait 5 seconds, move to the unloading area, and display a message that it has reached the unloading area.

NOTE: Achieving Multiple Goals

Follow the instructions below to automatically command the robot to the desired pickup and unloading areas:

  Example: pick_objects.cpp

#include <ros/ros.h>
#include <move_base_msgs/MoveBaseAction.h>
#include <actionlib/client/simple_action_client.h>

// Define a client to send goal requests to the move_base server through a SimpleActionClient
typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> MoveBaseClient;

int main(int argc, char** argv){
  // Initialize the pick_objects node
  ros::init(argc, argv, "pick_objects");

  //tell the action client that we want to spin a thread by default
  MoveBaseClient ac("move_base", true);

  // Wait 5 sec for move_base action server to come up
  while(!ac.waitForServer(ros::Duration(5.0))){
    ROS_INFO("Waiting for the move_base action server to come up");
  }

  move_base_msgs::MoveBaseGoal goal;

  // set up the frame parameters
  goal.target_pose.header.frame_id = "map";
  goal.target_pose.header.stamp = ros::Time::now();

  // Define a position and orientation for the robot to reach
  goal.target_pose.pose.position.x = 2.0;
  goal.target_pose.pose.position.y = 3.0;
  goal.target_pose.pose.orientation.w = 1.0;

   // Send the goal position and orientation for the robot to reach
  ROS_INFO("Sending 1st goal");
  ac.sendGoal(goal);

  // Wait an infinite time for the results
  ac.waitForResult();

  // Check if the robot reached its goal
  if(ac.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
  {
    ROS_INFO("Hooray, the base moved to the 1st Goal");
  }
  else
    ROS_INFO("The base failed to move to the 1st Goal");

  if(ac.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
  {
    ros::Duration(5).sleep();

    goal.target_pose.header.frame_id = "map";
    goal.target_pose.header.stamp = ros::Time::now();

    goal.target_pose.pose.position.x = -2.0;
    goal.target_pose.pose.position.y = 1.0;
    goal.target_pose.pose.orientation.w = 1.0;

    ROS_INFO("Sending 2nd goal");
    ac.sendGoal(goal);

    ac.waitForResult();

    if(ac.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
    {
      ROS_INFO("Hooray, the base moved to the 2nd Goal");
    }
    else
      ROS_INFO("The base failed to move to the 2nd Goal");


  }

  return 0;
}

CMakeLists.txt:

cmake_minimum_required(VERSION 2.8.3)
project(pick_objects)

## Compile as C++11, supported in ROS Kinetic and newer
# add_compile_options(-std=c++11)

## Find catkin macros and libraries
## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)
## is used, also find other catkin packages
find_package(catkin REQUIRED COMPONENTS
  actionlib
  move_base_msgs
  roscpp
)

## System dependencies are found with CMake's conventions
# find_package(Boost REQUIRED COMPONENTS system)


## Uncomment this if the package has a setup.py. This macro ensures
## modules and global scripts declared therein get installed
## See http://ros.org/doc/api/catkin/html/user_guide/setup_dot_py.html
# catkin_python_setup()

################################################
## Declare ROS messages, services and actions ##
################################################

## To declare and build messages, services or actions from within this
## package, follow these steps:
## * Let MSG_DEP_SET be the set of packages whose message types you use in
##   your messages/services/actions (e.g. std_msgs, actionlib_msgs, ...).
## * In the file package.xml:
##   * add a build_depend tag for "message_generation"
##   * add a build_depend and a exec_depend tag for each package in MSG_DEP_SET
##   * If MSG_DEP_SET isn't empty the following dependency has been pulled in
##     but can be declared for certainty nonetheless:
##     * add a exec_depend tag for "message_runtime"
## * In this file (CMakeLists.txt):
##   * add "message_generation" and every package in MSG_DEP_SET to
##     find_package(catkin REQUIRED COMPONENTS ...)
##   * add "message_runtime" and every package in MSG_DEP_SET to
##     catkin_package(CATKIN_DEPENDS ...)
##   * uncomment the add_*_files sections below as needed
##     and list every .msg/.srv/.action file to be processed
##   * uncomment the generate_messages entry below
##   * add every package in MSG_DEP_SET to generate_messages(DEPENDENCIES ...)

## Generate messages in the 'msg' folder
# add_message_files(
#   FILES
#   Message1.msg
#   Message2.msg
# )

## Generate services in the 'srv' folder
# add_service_files(
#   FILES
#   Service1.srv
#   Service2.srv
# )

## Generate actions in the 'action' folder
# add_action_files(
#   FILES
#   Action1.action
#   Action2.action
# )

## Generate added messages and services with any dependencies listed here
# generate_messages(
#   DEPENDENCIES
#   move_base_msgs
# )

################################################
## Declare ROS dynamic reconfigure parameters ##
################################################

## To declare and build dynamic reconfigure parameters within this
## package, follow these steps:
## * In the file package.xml:
##   * add a build_depend and a exec_depend tag for "dynamic_reconfigure"
## * In this file (CMakeLists.txt):
##   * add "dynamic_reconfigure" to
##     find_package(catkin REQUIRED COMPONENTS ...)
##   * uncomment the "generate_dynamic_reconfigure_options" section below
##     and list every .cfg file to be processed

## Generate dynamic reconfigure parameters in the 'cfg' folder
# generate_dynamic_reconfigure_options(
#   cfg/DynReconf1.cfg
#   cfg/DynReconf2.cfg
# )

###################################
## catkin specific configuration ##
###################################
## The catkin_package macro generates cmake config files for your package
## Declare things to be passed to dependent projects
## INCLUDE_DIRS: uncomment this if your package contains header files
## LIBRARIES: libraries you create in this project that dependent projects also need
## CATKIN_DEPENDS: catkin_packages dependent projects also need
## DEPENDS: system dependencies of this project that dependent projects also need
catkin_package(
#  INCLUDE_DIRS include
#  LIBRARIES pick_objects
#  CATKIN_DEPENDS actionlib move_base_msgs roscpp
#  DEPENDS system_lib
)

###########
## Build ##
###########

## Specify additional locations of header files
## Your package locations should be listed before other locations
include_directories(
# include
  ${catkin_INCLUDE_DIRS}
)

## Declare a C++ library
# add_library(${PROJECT_NAME}
#   src/${PROJECT_NAME}/pick_objects.cpp
# )

## Add cmake target dependencies of the library
## as an example, code may need to be generated before libraries
## either from message generation or dynamic reconfigure
# add_dependencies(${PROJECT_NAME} ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})

## Declare a C++ executable
## With catkin_make all packages are built within a single CMake context
## The recommended prefix ensures that target names across packages don't collide
add_executable(${PROJECT_NAME} src/pick_objects.cpp)

## Rename C++ executable without prefix
## The above recommended prefix causes long target names, the following renames the
## target back to the shorter version for ease of user use
## e.g. "rosrun someones_pkg node" instead of "rosrun someones_pkg someones_pkg_node"
# set_target_properties(${PROJECT_NAME}_node PROPERTIES OUTPUT_NAME node PREFIX "")

## Add cmake target dependencies of the executable
## same as for the library above
# add_dependencies(${PROJECT_NAME}_node ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})

## Specify libraries to link a library or executable target against
target_link_libraries(${PROJECT_NAME}
   ${catkin_LIBRARIES}
 )

#############
## Install ##
#############

# all install targets should use catkin DESTINATION variables
# See http://ros.org/doc/api/catkin/html/adv_user_guide/variables.html

## Mark executable scripts (Python etc.) for installation
## in contrast to setup.py, you can choose the destination
# install(PROGRAMS
#   scripts/my_python_script
#   DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
# )

## Mark executables for installation
## See http://docs.ros.org/melodic/api/catkin/html/howto/format1/building_executables.html
# install(TARGETS ${PROJECT_NAME}_node
#   RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
# )

## Mark libraries for installation
## See http://docs.ros.org/melodic/api/catkin/html/howto/format1/building_libraries.html
# install(TARGETS ${PROJECT_NAME}
#   ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
#   LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
#   RUNTIME DESTINATION ${CATKIN_GLOBAL_BIN_DESTINATION}
# )

## Mark cpp header files for installation
# install(DIRECTORY include/${PROJECT_NAME}/
#   DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
#   FILES_MATCHING PATTERN "*.h"
#   PATTERN ".svn" EXCLUDE
# )

## Mark other files for installation (e.g. launch and bag files, etc.)
# install(FILES
#   # myfile1
#   # myfile2
#   DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
# )

#############
## Testing ##
#############

## Add gtest based cpp test target and link libraries
# catkin_add_gtest(${PROJECT_NAME}-test test/test_pick_objects.cpp)
# if(TARGET ${PROJECT_NAME}-test)
#   target_link_libraries(${PROJECT_NAME}-test ${PROJECT_NAME})
# endif()

## Add folders to be run by python nosetests
# catkin_add_nosetests(test)

pick_objects.sh:

#!/bin/sh

path_catkin_ws="/home/workspace/catkin_ws"

xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && roslaunch my_robotb world.launch" &

sleep 8
  
xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && roslaunch my_robotb amcl.launch " &

sleep 5

xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && roslaunch turtlebot_rviz_launchers view_navigation.launch" &

sleep 5

xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && rosrun pick_objects pick_objects"

7. Virtual objects

Building virtual objects
The final task of this project is to simulate objects with markers in rviz. The virtual object is an object picked up by the robot and delivered, so it should first appear in the pickup area, and again once the robot reaches the unloading area.

First, let's see how to draw markers in rviz. There is an official ROS tutorial that teaches how to do this, and it is a good reference: it contains a C++ node that draws basic shapes like arrows, cubes, cylinders, and spheres in rviz. It shows how to define a marker, scale it, set its position and orientation, and finally publish it to rviz. The node included in the tutorial publishes a different shape every second at the same position and orientation. Check out the tutorial (rviz/Tutorials/Markers: Basic Shapes - ROS Wiki).

First run this node and visualize the marker in rviz. Then modify the code to publish a single shape, for example a cube. The code should follow this algorithm:

  • Publish the marker at the pickup area

  • Pause for 5 seconds

  • Hide the marker (it disappears)

  • Pause for 5 seconds

  • Publish the marker at the unloading area

Later, combine this node with the previously coded pick_objects node to simulate a complete home service robot.

Example: add_marker.sh

#!/bin/sh

path_catkin_ws="/home/workspace/catkin_ws"

xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && roslaunch my_robotb world.launch" &

sleep 8
  
xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && roslaunch my_robotb amcl.launch " &

sleep 5

xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && roslaunch turtlebot_rviz_launchers view_navigation.launch" &

sleep 5

xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && rosrun add_markers add_markers"

8. Home Service Robot
Now it's time to simulate a full home service robot capable of navigating, picking up, and delivering virtual objects. For this, the add_markers node and the pick_objects node need to communicate with each other. More precisely, the add_markers node should subscribe to the robot's odometry to keep track of the robot's pose.

Modify the add_markers node as follows:

  • Initially display the marker at the pickup area

  • Hide the marker when the robot reaches the pickup area

  • Wait 5 seconds to simulate the pickup

  • Display the marker at the unloading area once the robot reaches it

Putting all the code together, follow these steps to simulate a home service robot:

Example: add_markers.cpp 

#include <ros/ros.h>
#include <visualization_msgs/Marker.h>
#include <nav_msgs/Odometry.h>
#include <math.h>

float pickup_x = 2.0;
float pickup_y = 3.0;
float dropoff_x = -2.0;
float dropoff_y = 1.0;

bool pick_up = false;
bool drop_off = false;

float tolerance = 0.6;

void odometry_cb(const nav_msgs::Odometry::ConstPtr& msg)
{
	float robot_pose_x = msg->pose.pose.position.x;
	float robot_pose_y = msg->pose.pose.position.y;

	float pickup_dis;
	float dropoff_dis;

	if(!pick_up && !drop_off)
	{
		pickup_dis = sqrt(pow((pickup_x - robot_pose_x),2) + pow((pickup_y - robot_pose_y),2));
		ROS_INFO("Distance to pickup zone = %f", pickup_dis);

		if(pickup_dis <= tolerance)
		{
			ROS_INFO("Arrived at the pickup zone");	
			pick_up = true;
		}
	}
	if(pick_up && !drop_off)
	{
		dropoff_dis = sqrt(pow((dropoff_x - robot_pose_x),2) + pow((dropoff_y - robot_pose_y),2));
		ROS_INFO("Distance to dropoff zone = %f", dropoff_dis);

		if(dropoff_dis <= tolerance)
		{
			ROS_INFO("Arrived at the dropoff zone");	
			drop_off = true;
		}
	}
}


int main(int argc,char** argv)
{
	ros::init(argc,argv,"add_markers");
	ros::NodeHandle n;
	ros::Rate r(1);
	ros::Publisher marker_pub = n.advertise<visualization_msgs::Marker>("visualization_marker",1);

	ros::Subscriber odometry_sub =n.subscribe("/odom",1000,odometry_cb);

	uint32_t shape = visualization_msgs::Marker::CUBE;


	visualization_msgs::Marker marker;


	marker.header.frame_id = "map";
	marker.header.stamp = ros::Time::now();

	marker.ns = "basic_shapes";
	marker.id = 0;


	marker.type = shape;

	marker.action = visualization_msgs::Marker::ADD;


	marker.scale.x = 1.0;
	marker.scale.y = 1.0;
	marker.scale.z = 1.0;


	marker.color.r = 0.0f;
	marker.color.g = 1.0f;
	marker.color.b = 0.0f;
	marker.color.a = 1.0f;


	marker.pose.position.x = pickup_x;
	marker.pose.position.y = pickup_y;
	marker.pose.position.z = 0;
	marker.pose.orientation.x = 0.0;
	marker.pose.orientation.y = 0.0;
	marker.pose.orientation.z = 0.0;
	marker.pose.orientation.w = 1.0;


	marker.lifetime = ros::Duration();


	while(ros::ok())
	{
		while(marker_pub.getNumSubscribers()<1)
		{
			if(!ros::ok())	
			{
				return 0;
			}
			ROS_WARN_ONCE("Please create a subscriber to the marker");
			sleep(1);
		}

		if(pick_up)
		{
			marker.action = visualization_msgs::Marker::DELETE;
			ROS_INFO("Object was picked up");
			ros::Duration(2.0).sleep();
		}

		if(drop_off)
		{
			marker.pose.position.x = dropoff_x;
			marker.pose.position.y = dropoff_y;
			marker.action = visualization_msgs::Marker::ADD;
			ROS_INFO("Object was dropped off");
			ros::Duration(2.0).sleep();
		}

		marker_pub.publish(marker);

		ros::spinOnce();
	}

	return 0;
}

Example: home_service.sh 

#!/bin/sh

path_catkin_ws="/home/workspace/catkin_ws"

xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && roslaunch my_robotb world.launch" &

sleep 8
  
xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && roslaunch my_robotb amcl.launch" &

sleep 5

xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && roslaunch add_markers view_home_service_navigation.launch rviz_path:=${path_catkin_ws}/src/rvizConfig/home_service.rviz"&

sleep 5

xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && rosrun add_markers add_markers"&

sleep 5

xterm  -e  " cd ${path_catkin_ws} && source devel/setup.bash && rosrun pick_objects pick_objects"

Note that there are multiple ways to implement the project requirements. To establish communication between the robot and the virtual object, one approach, mentioned earlier, is to have the add_markers node subscribe to the robot's odometry and keep track of the robot's pose.

Other solutions to this problem could be to use ROS parameters, to subscribe to the AMCL pose, or even to publish a new variable indicating whether the robot is in the pickup or drop-off zone. Feel free to solve this problem any way you like.

## Run the project

* Navigate to the "src" folder and clone the necessary repositories

cd /home/workspace/catkin_ws/src 
git clone https://github.com/ros-perception/slam_gmapping.git 
git clone https://github.com/turtlebot/turtlebot.git 
git clone https://github.com/turtlebot/turtlebot_interactions.git 
git clone https://github.com/turtlebot/turtlebot_simulator.git 

* Open the repository, compile the code

cd /home/workspace/catkin_ws/
catkin_make
source devel/setup.bash

* Start a home service robot

./src/scripts/home_service.sh

## Hint

1. It is recommended to update and upgrade the environment before running the code.

sudo apt-get update && sudo apt-get upgrade -y

2. How to visualize virtual objects in rviz

To view the marker (virtual object) demo, in addition to running ./add_marker.sh, you also need to manually add a Marker display in rviz. The steps are as follows:

* Find the rviz window

* In the lower left panel, click the "Add" button

* In the "By Display Type" tab, navigate to "rviz", then "Marker"

* Click the "OK" button

* Once done, you should see the marker (virtual object) appear, disappear, and then reappear


 


Origin blog.csdn.net/jeffliu123/article/details/130049156