[ROS Robotic Arm Getting Started Tutorial]

First of all, let me note that this project is based on the [ROS Robotic Arm Introduction Tutorial] by an uploader ("UP主") on Bilibili, and reproduces his earlier [Robotic Arm Vision Grabbing: From Theory to Practice].
That content is a milestone showcase and technology share from his graduate studies, and all of its data and code are open source, so I reproduce it here. The hardware I use differs from his: the uploader uses a UR5, while my laboratory uses a UR3. The relevant UR3 CB3.12 materials are listed below: https://www.universal-robots.cn/cb3/
[UR3 system upgrade to CB3.12 with URcap 1.05]

Hardware support

No.  Name                                                Function
1    UR3 robotic arm                                     Executor of all actions
2    D435i camera                                        The executor's eyes
3    Ubuntu deep-learning environment                    The executor's thinking brain
4    Planar grasping platform                            Experimental operating environment
5    6x6 black-and-white checkerboard calibration board  Eye-to-hand camera calibration
6    3x3x3 cm 3D-printed cube                            Workpiece to be grasped

Code support
[ROS robotic arm entry tutorial] code v3.0
Link: https://pan.baidu.com/s/1KFLQXVWShG5KfroCd6eM0A=8888
Extraction code: 8888
[ROS robotic arm entry tutorial - Xiaowu] PPT
Link: https://pan.baidu.com/s/18ierKnf8OJPPvGn8uOf1hw=8888
Extraction code: 8888

1. Overview


[Autolabor Primary Tutorial] Getting Started with ROS Robots

Robot Operating System ROS Quick Start Tutorial


2. Introduction to ROS


2.1 Introduction to ROS

ROS official website: http://wiki.ros.org/
autolabor address: http://www.autolabor.com.cn/book/ROSTutorials/

Moveit API: https://moveit.ros.org/documentation/source-code-api/


Moveit tutorials (melodic): http://docs.ros.org/en/melodic/api/moveit_tutorials/html/index.html


2.2 ROS robot packages

ros-melodic: git clone https://github.com/ros-industrial/universal_robot.git
ros-noetic: https://blog.csdn.net/Dawn_yc/article/details/114791755
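
The clone is meant to live in a catkin workspace. A minimal sketch of the workspace setup (the ~/catkin_ws path is an illustrative assumption):

$ mkdir -p ~/catkin_ws/src && cd ~/catkin_ws/src
$ git clone https://github.com/ros-industrial/universal_robot.git
$ cd ~/catkin_ws
$ rosdep install --from-paths src --ignore-src -y
$ catkin_make
$ source devel/setup.bash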

3. Robot URDF modeling


3.1 Introduction to URDF


URDF is the robot model description format in ROS. It describes the robot's rigid-body appearance, physical properties, joint types, and so on.

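To make the format concrete, below is a minimal hand-written URDF sketch: a toy two-link example of my own, not part of the UR packages; one box-shaped base link, one cylindrical link, and a revolute joint about z.

<?xml version="1.0"?>
<robot name="two_link_arm">
  <link name="base_link">
    <visual>
      <geometry><box size="0.1 0.1 0.1"/></geometry>
    </visual>
  </link>
  <link name="link1">
    <visual>
      <geometry><cylinder radius="0.02" length="0.3"/></geometry>
    </visual>
  </link>
  <!-- revolute joint connecting the two links, rotating about z -->
  <joint name="joint1" type="revolute">
    <parent link="base_link"/>
    <child link="link1"/>
    <origin xyz="0 0 0.05" rpy="0 0 0"/>
    <axis xyz="0 0 1"/>
    <limit lower="-3.14" upper="3.14" effort="10" velocity="1.0"/>
  </joint>
</robot>

Saved as two_link_arm.urdf, it can be checked and visualized with the commands in the next subsection.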

3.2 View model

$ check_urdf xxx.urdf          # check the xxx.urdf file for errors
$ urdf_to_graphiz xxx.urdf     # generate a PDF from the URDF for visualization
$ roslaunch ur_description view_ur5.launch    # display the ur5 model

3.3 Configure and save Rviz

roslaunch ur_description view_ur5_with_gripper.launch    # display the ur5+robotiq85 model


roslaunch ur_description view_ur5.launch
rviz

The Rviz config is saved at universal_robot/ur_description/cfg/2222.rviz:

<?xml version="1.0"?>
<launch>
  <include file="$(find ur_description)/launch/ur5_upload.launch"/>

  <node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher" >
    <param name="use_gui" value="true"/>
  </node>
  <node name="robot_state_publisher" pkg="robot_state_publisher" type="state_publisher" />
  <node name="rviz" pkg="rviz" type="rviz" args="-d $(find ur_description)/cfg/2222.rviz" required="true" />
</launch>


4. Moveit! core function introduction and Rviz control

4.1 Introduction to Moveit!

MoveIt! is a software toolset for robotic arms that integrates several functions:
- Kinematics
- Motion Planning
- Collision Checking
- 3D Perception
- Manipulation

4.2 Robot ROS package architecture

Run the MoveIt! Setup Assistant:

rosrun moveit_setup_assistant moveit_setup_assistant

1. Load the URDF model.
2. Generate the self-collision matrix, which checks which link pairs can collide.
3. Select the kinematics solver.
4. Choose the reference link and the end-effector link.
5. Configure named poses, e.g. a flat pose with all joints at zero, and an "up" pose at 0, -1.5708, 0, -1.5708, 0, 0.
6. Configure the end gripper.
7. Fill in the author information and finally generate the configuration package.

An existing model can also be imported directly.

4.3 Rviz demo control

After the MoveIt! configuration there is a corresponding Rviz demo:

roslaunch ur5_moveit_config demo.launch

Drag the arm to a random target pose and click Plan & Execute to move it to the corresponding position. Obstacles can also be added to the planning scene.

Gripper demo:

roslaunch ur5_gripper_moveit_config demo.launch


5. Gazebo simulation and controlling a real robot

A split-pane terminal is convenient here:

sudo apt install terminator

5.1 Connection between Moveit and the controller

How to use:

# Control the robot in Gazebo simulation
roslaunch ur_gazebo ur5.launch
roslaunch ur5_moveit_config ur5_moveit_planning_execution.launch sim:=true
roslaunch ur5_moveit_config moveit_rviz.launch config:=true
# Control a real UR5 robot
# one-click start:
roslaunch ur_planning start_ur5.launch
# start separately:
roslaunch ur_modern_driver ur5_bringup.launch
roslaunch ur5_moveit_config ur5_moveit_planning_execution.launch
roslaunch ur5_moveit_config moveit_rviz.launch config:=true
# Rviz for visualization only
roslaunch ur5_moveit_config demo.launch
rosrun ur_planning moveitServer.py
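
The bridge between MoveIt and the driver is the controller list in ur5_moveit_config/config/controllers.yaml, which tells MoveIt which FollowJointTrajectory action server to send trajectories to. A sketch of what that file typically looks like for a UR5 (verify the exact values against your own package):

controller_list:
  - name: ""
    action_ns: follow_joint_trajectory
    type: FollowJointTrajectory
    joints:
      - shoulder_pan_joint
      - shoulder_lift_joint
      - elbow_joint
      - wrist_1_joint
      - wrist_2_joint
      - wrist_3_joint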

5.2 Gazebo simulation


5.3 Control the real UR5 robot

One-click start:

roslaunch ur_planning start_ur5.launch

Start separately:

roslaunch ur_modern_driver ur5_bringup.launch
roslaunch ur5_moveit_config ur5_moveit_planning_execution.launch
roslaunch ur5_moveit_config moveit_rviz.launch config:=true


6. Moveit basics (python)


6.1 Moveit! Review


6.2 Joint space movement

Install moveit_commander for Python:

sudo apt-get update
sudo apt-get install ros-melodic-moveit-commander

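The snippets in this lecture are methods of a wrapper class around MoveGroupCommander. Since the class header is not shown here, the constructor below is a minimal sketch of my own: the class name MoveIt_Control is an assumption, while 'manipulator' and 'base_link' are the group and frame names used in ur5_moveit_config, so adjust them to your setup.

#!/usr/bin/env python
import sys
import rospy
import numpy as np
import moveit_commander
from moveit_commander import MoveGroupCommander, PlanningSceneInterface
from moveit_msgs.msg import PlanningScene, ObjectColor
from geometry_msgs.msg import PoseStamped
from tf.transformations import quaternion_from_euler

class MoveIt_Control:
    def __init__(self):
        moveit_commander.roscpp_initialize(sys.argv)
        rospy.init_node('moveit_control_server')
        self.arm = MoveGroupCommander('manipulator')      # planning group of the UR arm
        self.end_effector_link = self.arm.get_end_effector_link()
        self.reference_frame = 'base_link'                # frame used by move_p/move_l below
        self.arm.set_pose_reference_frame(self.reference_frame)
        self.arm.allow_replanning(True)
        self.arm.set_goal_position_tolerance(0.001)
        self.arm.set_goal_orientation_tolerance(0.01)

With such a class in place, MoveIt_Control().move_j() drives the arm to the "up" pose used as the default below.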

    # Joint-space planning: input 6 joint angles (unit: rad)
    def move_j(self, joint_configuration=None, a=1, v=1):
        # Set the arm's target using six joint positions (unit: rad)
        if joint_configuration is None:
            joint_configuration = [0, -1.5707, 0, -1.5707, 0, 0]
        self.arm.set_max_acceleration_scaling_factor(a)
        self.arm.set_max_velocity_scaling_factor(v)
        self.arm.set_joint_value_target(joint_configuration)
        rospy.loginfo("move_j:" + str(joint_configuration))
        self.arm.go()
        rospy.sleep(1)


    # Pose planning: input x, y, z, R, P, Y
    def move_p(self, tool_configuration=None, a=1, v=1):
        if tool_configuration is None:
            tool_configuration = [0.3, 0, 0.3, 0, -np.pi/2, 0]
        self.arm.set_max_acceleration_scaling_factor(a)
        self.arm.set_max_velocity_scaling_factor(v)

        target_pose = PoseStamped()
        target_pose.header.frame_id = self.reference_frame
        target_pose.header.stamp = rospy.Time.now()
        target_pose.pose.position.x = tool_configuration[0]
        target_pose.pose.position.y = tool_configuration[1]
        target_pose.pose.position.z = tool_configuration[2]
        q = quaternion_from_euler(tool_configuration[3], tool_configuration[4], tool_configuration[5])
        target_pose.pose.orientation.x = q[0]
        target_pose.pose.orientation.y = q[1]
        target_pose.pose.orientation.z = q[2]
        target_pose.pose.orientation.w = q[3]

        self.arm.set_start_state_to_current_state()
        self.arm.set_pose_target(target_pose, self.end_effector_link)
        rospy.loginfo("move_p:" + str(tool_configuration))
        traj = self.arm.plan()
        self.arm.execute(traj)
        rospy.sleep(1)

6.3 Cartesian Space Motion


    # Cartesian straight-line motion: input (x, y, z, R, P, Y, x2, y2, z2, R2, ...)
    # By default only one waypoint is executed; several waypoints can be passed in
    def move_l(self, tool_configuration=None, waypoints_number=1, a=0.5, v=0.5):
        if tool_configuration is None:
            tool_configuration = [0.3, 0, 0.3, 0, -np.pi/2, 0]
        self.arm.set_max_acceleration_scaling_factor(a)
        self.arm.set_max_velocity_scaling_factor(v)

        # Build the waypoint list
        waypoints = []
        for i in range(waypoints_number):
            target_pose = PoseStamped()
            target_pose.header.frame_id = self.reference_frame
            target_pose.header.stamp = rospy.Time.now()
            target_pose.pose.position.x = tool_configuration[6*i+0]
            target_pose.pose.position.y = tool_configuration[6*i+1]
            target_pose.pose.position.z = tool_configuration[6*i+2]
            q = quaternion_from_euler(tool_configuration[6*i+3], tool_configuration[6*i+4], tool_configuration[6*i+5])
            target_pose.pose.orientation.x = q[0]
            target_pose.pose.orientation.y = q[1]
            target_pose.pose.orientation.z = q[2]
            target_pose.pose.orientation.w = q[3]
            waypoints.append(target_pose.pose)
        rospy.loginfo("move_l:" + str(tool_configuration))

        fraction = 0.0   # fraction of the path successfully planned
        maxtries = 100   # maximum number of planning attempts
        attempts = 0     # attempts made so far

        # Use the arm's current state as the motion's start state
        self.arm.set_start_state_to_current_state()

        # Try to plan a Cartesian path passing through all waypoints in order
        while fraction < 1.0 and attempts < maxtries:
            (plan, fraction) = self.arm.compute_cartesian_path(
                waypoints,  # list of waypoint poses
                0.001,      # eef_step: end-effector step size
                0.00,       # jump_threshold
                True)       # avoid_collisions
            attempts += 1
        if fraction == 1.0:
            rospy.loginfo("Path computed successfully. Moving the arm.")
            self.arm.execute(plan)
            rospy.loginfo("Path execution complete.")
        else:
            rospy.loginfo(
                "Path planning failed with only " + str(fraction) +
                " success after " + str(maxtries) + " attempts.")
        rospy.sleep(1)

6.4 Interacting with the environment


    # Add a table below the arm so that planning is confined to the upper half-space,
    # avoiding collisions with the table and other objects underneath
    def set_scene(self):
        # set table
        self.scene = PlanningSceneInterface()
        self.scene_pub = rospy.Publisher('planning_scene', PlanningScene, queue_size=5)
        self.colors = dict()
        rospy.sleep(1)
        table_id = 'table'
        self.scene.remove_world_object(table_id)
        rospy.sleep(1)
        table_size = [2, 2, 0.01]
        table_pose = PoseStamped()
        table_pose.header.frame_id = self.reference_frame
        table_pose.pose.position.x = 0.0
        table_pose.pose.position.y = 0.0
        table_pose.pose.position.z = -table_size[2]/2 - 0.02
        table_pose.pose.orientation.w = 1.0
        self.scene.add_box(table_id, table_pose, table_size)
        self.setColor(table_id, 0.5, 0.5, 0.5, 1.0)
        self.sendColors()
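
set_scene calls setColor and sendColors, which the listing above does not define. A sketch following the common MoveIt planning-scene color pattern, with signatures matched to the calls above:

    # Record a color for a named scene object
    def setColor(self, name, r, g, b, a=0.9):
        color = ObjectColor()
        color.id = name
        color.color.r = r
        color.color.g = g
        color.color.b = b
        color.color.a = a
        self.colors[name] = color

    # Publish all recorded colors as a planning-scene diff
    def sendColors(self):
        p = PlanningScene()
        p.is_diff = True
        for color in self.colors.values():
            p.object_colors.append(color)
        self.scene_pub.publish(p)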

7. Moveit basics (C++): path planning with constraints


8. Motion Planning - How to choose the planning algorithm in Moveit?

8.1 Why path planning is needed

- Obstacle avoidance: avoid collisions with static objects near the arm (such as tables) and with dynamic objects (such as a suddenly approaching person).
- Task requirements on the motion path: kinematic or dynamic constraints, e.g. welding, or carrying a cup filled with water.

Path planning classification:
- Search-based: Dijkstra, A*, Anytime A*, ARA*, D*
- Sampling-based: PRM, RRT, RRT-Connect, RRT*, Kinodynamic-RRT* (respects dynamics), Anytime RRT*, Informed RRT*
- Intelligent algorithms, such as genetic algorithms and ant colony algorithms

Sampling-based path planning algorithms (a toy sketch follows this list):
- Background, probabilistic completeness: a solution will be found given infinite time.
- PRM: 1) randomly sample n points to build a graph; 2) search over that graph.
- RRT: rapidly-exploring random tree; yields non-optimal solutions. Nearest-neighbor queries can use data structures such as kd-trees to quickly find the closest node in the tree.
- RRT-Connect: bidirectional RRT that grows trees from the start and the goal simultaneously, so search is relatively fast; the default algorithm in ROS MoveIt.
- RRT*: two improvements over RRT: the rule for connecting a new node to the tree is changed, and the search tree is "rewired" (pruned) so the path approaches the true optimum.
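
To make the RRT loop concrete, here is a toy 2-D sketch of my own in plain NumPy, planning in the unit square; the is_free collision callback is an assumption, and a real planner would use a kd-tree for the nearest-neighbor query:

import numpy as np

def rrt(start, goal, is_free, n_iter=2000, step=0.05, goal_tol=0.05):
    # Toy 2-D RRT in the unit square; returns a list of points or None.
    nodes = [np.asarray(start, dtype=float)]
    parent = {0: None}
    goal = np.asarray(goal, dtype=float)
    for _ in range(n_iter):
        sample = np.random.rand(2)                 # random sample of the space
        # linear-scan nearest neighbor (a kd-tree would speed this up)
        near = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - sample))
        d = sample - nodes[near]
        if np.linalg.norm(d) < 1e-9:
            continue
        new = nodes[near] + step * d / np.linalg.norm(d)
        if not is_free(new):                       # reject nodes in collision
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = near
        if np.linalg.norm(new - goal) < goal_tol:  # goal reached: backtrack
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None

# e.g. an obstacle-free run: rrt([0.1, 0.1], [0.9, 0.9], is_free=lambda p: True)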

8.2 How to choose a path planning algorithm


The planning algorithm can be chosen in three places (a code sketch for option 3 follows):
  1. Rviz
  2. Moveit_setup_assistant
  3. In the program
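For option 3, a minimal sketch using moveit_commander; the planner ID must match one listed in the package's ompl_planning.yaml ("RRTConnectkConfigDefault" is the melodic-era name, so check your own config):

# select the OMPL planner and give it a time budget
arm = moveit_commander.MoveGroupCommander('manipulator')
arm.set_planner_id('RRTConnectkConfigDefault')
arm.set_planning_time(5.0)   # seconds allowed for planning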

9. Visual obstacle avoidance


9.1 Introduction


9.2 Official demo

- Only a point-cloud / depth-map input is needed to generate an occupancy grid (octomap):

$ roslaunch moveit_tutorials obstacle_avoidance_demo.launch

- Cylindrical obstacles can be extracted from the grid map:

$ roslaunch moveit_tutorials detect_and_add_cylinder_collision_object_demo.launch

(Note: stop the previous command before running this one.)
Packages:

https://github.com/ros-planning/moveit_tutorials
https://github.com/ros-planning/panda_moveit_config


9.3 Practice

roslaunch ur5_moveit_config demo_with_obstacle_avoidance.launch

Main steps:
- 1. Start the camera ROS program:

$ sudo apt-get install ros-melodic-realsense2-camera
$ sudo apt-get install ros-melodic-realsense2-description
$ roslaunch realsense2_camera demo_pointcloud.launch

- 2. Modify the MoveIt configuration file (a sketch of such a file follows after this list): either
  (1) set point_cloud_topic in xxx_moveit_config/config/sensors_kinect_pointcloud.yaml to the point-cloud topic published by your own camera, or
  (2) set image_topic in xxx_moveit_config/config/sensors_kinect_depthmap.yaml to the depth-image topic published by your own camera.
- 3. (Optional) Modify the visual obstacle-avoidance parameters, such as the octomap_resolution grid resolution.
- 4. (Optional) Modify the camera parameters, such as resolution and update frequency.
- 5. Publish the pose of the camera relative to the robot's base coordinates:

<node pkg="tf2_ros" type="static_transform_publisher" name="camera_to_robot"
      args="0.72032458 -0.21936176 0.90925255 0.98422218 0.0969083 -0.04779138 0.14011233
            camera_color_optical_frame world" />


roslaunch ur5_moveit_config demo_with_obstacle_avoidance.launch

Reference: https://blog.csdn.net/ssw_1990/article/details/104053041

10. ROS and Deep Learning

10.1 Introduction

ROS-Moveit!:
ROS melodic and earlier: Moveit! defaults to Python 2.7.
ROS noetic and ROS 2: Python 3 is supported.

Deep learning:
Deep learning uses Python 3, and its environment is generally set up in a virtual environment.
So how can deep learning be integrated with ROS?

10.2 Practice

Coexistence of ROS and Anaconda:
- 1. One-click installation of ROS:

$ wget http://fishros.com/install -O fishros && bash fishros

- 2. Install Anaconda:
https://blog.csdn.net/KIK9973/article/details/118772450
(or search for the Anaconda installation tutorial matching your Ubuntu version)
- 3. Create a virtual environment in Anaconda:

conda create -n YOUR_ENV_NAME python=3.7

- 4. Install the ROS libraries in the virtual environment:

$ conda activate YOUR_ENV_NAME
$ pip install rospkg

Note: whether inside a virtual environment or not, as long as rospkg is installed you can import ROS packages such as rospy and std_msgs, so both topic and service communication work.
- 5. Wrap the Moveit functions in a server (see Lecture 6, Moveit basics (python) / ROS service communication):

$ rosrun ur_planning moveitServer.py

- 6. Write a client script inside the virtual environment and connect to the Moveit functions through service communication, as sketched below.
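A minimal client sketch for step 6; the service name move_j and the MoveJ srv type are hypothetical, so match them to whatever moveitServer.py actually advertises:

#!/usr/bin/env python3
import rospy
from ur_planning.srv import MoveJ   # hypothetical srv: float64[] joints -> bool success

if __name__ == '__main__':
    rospy.init_node('grasp_client')
    rospy.wait_for_service('move_j')                  # block until the server is up
    move_j = rospy.ServiceProxy('move_j', MoveJ)
    resp = move_j([0, -1.5707, 0, -1.5707, 0, 0])     # request the "up" pose
    rospy.loginfo('move_j succeeded: %s', resp.success)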

11. 6-DoF visual grasping based on ROS-Moveit

11.1 Visual grasping

  1. Planar grasping
    - Cornell dataset: 1035 RGB-D images, 8019 labels
    - Jacquard dataset: 54K RGB-D images, 1.1 million labels
    - Classic planar grasping algorithms: GGCNN[1], GRCNN[2], Swin-Transformer[3]
  2. Six-DoF grasping
    - GraspNet-1Billion dataset

11.2 System framework design

Motion planning:
- Function: reach the pose specified by the grasping algorithm.
- Path planning: plan a collision-free trajectory from the current pose to the target pose.

Hand-eye calibration:
- Function: obtain the transform between the robot coordinate system and the camera coordinate system.
- Effect: errors within ±5 mm are currently achievable.

11.3 Grasping experiments

Graspness[6] single-object grasping experiment.
List of grasped items, 11 objects in total: 1. apple, 2. banana, 3. flashlight, 4. baseball, 5. shoe brush, 6. badminton box, 7. radio, 8. down-jacket cleaner, 9. white-shoe cleaner, 10. USB flash drive, 11. remote control.

Graspness[6] continuous grasping experiment in complex scenes.
- Conclusion: compared with other algorithms, the grasp-pose predictions of graspness[6] are more accurate; grasp poses can be detected even for most small objects and for objects with the same color as the background, so robustness is stronger.
- Remaining problems:
  - Objects that are too "flat" (low in height) make grasp-pose detection difficult.
  - Collisions with other objects may occur during grasping.

Source: blog.csdn.net/vor234/article/details/131633464