ROS Learning: Building a Gazebo robot simulation environment on Ubuntu 16.04 with three sensors (camera, Kinect, Lidar) and displaying their data in RViz


Knowing how to build Gazebo environments is essential for robot simulation learning, and it is also one of the harder parts, so it deserves focused attention!
The previous post in this ROS learning series explained how to set up a Gazebo workspace inside a ROS environment — the entry-level first step. In this post we go deeper and build Gazebo robot environments with three sensors: a camera, a Kinect, and a Lidar.

Version: Ubuntu 16.04

I. Preparation before building the Gazebo environment

1. Download the Gazebo models locally

So that the simulation models load reliably later, it is best to download the Gazebo model library to the local machine rather than letting Gazebo fetch models over the network: the models are hosted on a foreign site, so loading them online is very slow!
1) Enter the hidden Gazebo folder

cd ~/.gazebo/

2) Create a models folder inside it

mkdir -p models

3) Enter the folder

cd ~/.gazebo/models/

4) Download the model files

wget http://file.ncnynl.com/ros/gazebo_models.txt
wget -i gazebo_models.txt

5) Extract every archive into the models directory

ls model.tar.g* | xargs -n1 tar xzvf

That is the whole model-download process. A tip: it really is slow — the whole set is only about 100 MB, but because the files are hosted on a foreign site the transfer crawls, so be patient. If you would rather not wait, send me (Lin Jun) a private message and I can share my downloaded copy; the archive is too large to attach to this blog, since my upload limit is only 200 MB!
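A note on the extraction one-liner above: `tar` unpacks one archive per invocation, so `xargs -n1` hands the listed archives to `tar xzvf` one at a time. Here is a self-contained demo of the same pattern on throwaway archives (the names `model_a`/`model_b` are made up for illustration):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Pack two throwaway "model" folders into gzipped tarballs,
# mimicking the archives downloaded above
for name in model_a model_b; do
  mkdir "$name"
  echo "<model/>" > "$name/model.config"
  tar czf "$name.tar.gz" "$name"
  rm -r "$name"
done

# Same pattern as in the post: list the archives, feed each one to tar
ls *.tar.gz | xargs -n1 tar xzvf

# Both model folders are back, each with its model.config
ls model_a/model.config model_b/model.config
```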

2. Upgrade Gazebo

Note: this step is only needed on Ubuntu 16.04; Ubuntu 18 users can skip it, because the incompatibility does not occur there!
The reason for the upgrade: installing ROS on Ubuntu 16.04 automatically pulls in Gazebo 7.0.0, which causes an error later with the camera sensor — when you run the camera simulation and then configure an image display in RViz, you get the following error:
(screenshot: RViz image display error)
I consulted a lot of material to no avail; in the end, upgrading Gazebo solved the problem nicely! Again, this applies to Ubuntu 16.04 users — the ROS version on Ubuntu 18 differs from 16.04 and handles the following steps without issue.
1) Install the aptitude package manager; it is more capable than apt-get and recommended here

sudo apt-get install aptitude

2) Upgrade Gazebo

sudo aptitude install gazebo7

Again the process takes a while — roughly 50 MB of downloads — so be patient. As shown below, Gazebo is upgraded to version 7.16:
(screenshot: aptitude upgrading gazebo7 to 7.16)
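To confirm the upgrade took effect, you can query the installed version from a terminal (the exact 7.16.x patch number may differ on your machine):

```shell
# Print the version reported by the Gazebo binary itself
gazebo --version
# Or ask dpkg which version of the gazebo7 package is installed
dpkg -s gazebo7 | grep '^Version'
```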

That completes the preparation; now we move on to building the Gazebo environment models.

II. Create the world files used by the launch files

1. Enter the package mbot_gazebo

cd ~/ros/src/mbot_gazebo/

2. Create a worlds folder

mkdir -p worlds

3. Enter the folder

cd worlds

4. Create the file playground.world

1) Create the file

touch playground.world

2) Open the file and paste in the world definition:

gedit playground.world

The file content is too long to show in full here. You can download the Gazebo project package from my previous blog post — it contains the file we need; just copy it over. It looks like this:
(screenshot: playground.world contents)
A part of it:
(screenshot: excerpt of playground.world)
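For orientation, a world file is plain SDF. A minimal skeleton in the same shape as playground.world might look like the following — this is an illustrative stub, not the actual file; the real playground.world fills in the walls and obstacles:

```xml
<?xml version="1.0"?>
<sdf version="1.6">
  <world name="default">
    <!-- light and ground come from the model library downloaded earlier -->
    <include>
      <uri>model://sun</uri>
    </include>
    <include>
      <uri>model://ground_plane</uri>
    </include>
    <!-- the real playground.world lists its walls and obstacles here -->
  </world>
</sdf>
```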

5. Create the file room.world

1) Create the file

touch room.world

2) Open the file and paste in the world definition:

gedit room.world

As before, the content is too long to include; download the Gazebo project package from my previous post and copy the file over:
(screenshot: room.world contents)
A part of it:
(screenshot: excerpt of room.world)

III. Building the Gazebo environment with a camera sensor

1. Create the launch file that runs the camera sensor

1) Enter the launch folder of the package mbot_gazebo

cd ~/ros/src/mbot_gazebo/launch

2) Create the camera sensor's launch file

touch view_mbot_with_camera_gazebo.launch

3) Open the file

gedit view_mbot_with_camera_gazebo.launch

4) Write the camera model configuration:

<launch>
    <!-- Launch file arguments -->
    <arg name="world_name" value="$(find mbot_gazebo)/worlds/playground.world"/>
    <arg name="paused" default="false"/>
    <arg name="use_sim_time" default="true"/>
    <arg name="gui" default="true"/>
    <arg name="headless" default="false"/>
    <arg name="debug" default="false"/>
    <!-- Start the Gazebo simulation -->
    <include file="$(find gazebo_ros)/launch/empty_world.launch">
        <arg name="world_name" value="$(arg world_name)" />
        <arg name="debug" value="$(arg debug)" />
        <arg name="gui" value="$(arg gui)" />
        <arg name="paused" value="$(arg paused)"/>
        <arg name="use_sim_time" value="$(arg use_sim_time)"/>
        <arg name="headless" value="$(arg headless)"/>
    </include>
    <!-- Load the robot description parameter -->
    <param name="robot_description" command="$(find xacro)/xacro --inorder '$(find mbot_description)/urdf/xacro/gazebo/mbot_with_camera_gazebo.xacro'" /> 
    <!-- Run joint_state_publisher to publish the robot's joint states -->
    <node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher" ></node> 
    <!-- Run robot_state_publisher to publish tf -->
    <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher"  output="screen" >
        <param name="publish_frequency" type="double" value="50.0" />
    </node>
    <!-- Spawn the robot model in Gazebo -->
    <node name="urdf_spawner" pkg="gazebo_ros" type="spawn_model" respawn="false" output="screen"
          args="-urdf -model mrobot -param robot_description"/> 
</launch>
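With the launch file running, the camera can be sanity-checked from a second terminal. The topic name below is an assumption — the actual name is set inside mbot_with_camera_gazebo.xacro, so check `rostopic list` first:

```shell
# List image topics published by the simulated camera
rostopic list | grep -i image
# Measure the frame rate of the (assumed) raw image topic
rostopic hz /camera/image_raw
# Quick standalone viewer, without configuring RViz
rosrun rqt_image_view rqt_image_view
```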

2. Open a terminal and run the camera launch file

1) Enter the ROS workspace

cd ~/ros

2) Run the camera simulation

roslaunch mbot_gazebo view_mbot_with_camera_gazebo.launch

3) Terminal output:
(screenshot: terminal output)
4) The running Gazebo simulation:
(screenshot: Gazebo with the camera robot in playground.world)

3. Open RViz and display the camera image

1) In a new terminal, enter the ROS workspace

cd ~/ros

2) Run RViz

rosrun rviz rviz

(screenshot: RViz main window)
3) Click Add -> By display type -> Image -> OK
(screenshot: RViz Add display dialog)
4) Configure the Image display
Configure it as in the figures: pick the image topic from the drop-down.
(screenshot: selecting the image topic)
The Image configuration looks like this:
(screenshot: Image display settings)
5) The result:
(screenshot: camera image rendered in RViz)

IV. Building the Gazebo environment with a Kinect sensor

1. Create the launch file that runs the Kinect sensor

1) Enter the launch folder of the package mbot_gazebo

cd ~/ros/src/mbot_gazebo/launch

2) Create the Kinect sensor's launch file

touch view_mbot_with_kinect_gazebo.launch

3) Open the file

gedit view_mbot_with_kinect_gazebo.launch

4) Write the Kinect model configuration:

<launch>

    <!-- Launch file arguments -->
    <arg name="world_name" value="$(find mbot_gazebo)/worlds/playground.world"/>
    <arg name="paused" default="false"/>
    <arg name="use_sim_time" default="true"/>
    <arg name="gui" default="true"/>
    <arg name="headless" default="false"/>
    <arg name="debug" default="false"/>

    <!-- Start the Gazebo simulation -->
    <include file="$(find gazebo_ros)/launch/empty_world.launch">
        <arg name="world_name" value="$(arg world_name)" />
        <arg name="debug" value="$(arg debug)" />
        <arg name="gui" value="$(arg gui)" />
        <arg name="paused" value="$(arg paused)"/>
        <arg name="use_sim_time" value="$(arg use_sim_time)"/>
        <arg name="headless" value="$(arg headless)"/>
    </include>

    <!-- Load the robot description parameter -->
    <param name="robot_description" command="$(find xacro)/xacro --inorder '$(find mbot_description)/urdf/xacro/gazebo/mbot_with_kinect_gazebo.xacro'" /> 

    <!-- Run joint_state_publisher to publish the robot's joint states -->
    <node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher" ></node> 

    <!-- Run robot_state_publisher to publish tf -->
    <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher"  output="screen" >
        <param name="publish_frequency" type="double" value="50.0" />
    </node>

    <!-- Spawn the robot model in Gazebo -->
    <node name="urdf_spawner" pkg="gazebo_ros" type="spawn_model" respawn="false" output="screen"
          args="-urdf -model mrobot -param robot_description"/> 

</launch>
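The Kinect plugin publishes several topics at once (RGB image, depth image, and a point cloud). The names below are assumptions that depend on how mbot_with_kinect_gazebo.xacro configures the plugin, so verify them with `rostopic list`:

```shell
# See every topic the Kinect plugin advertises
rostopic list | grep -i camera
# Check the message type of the (assumed) point-cloud topic;
# a sensor_msgs/PointCloud2 topic can be shown in RViz with a PointCloud2 display
rostopic info /camera/depth/points
```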

2. Open a terminal and run the Kinect launch file

1) Enter the ROS workspace

cd ~/ros

2) Run the Kinect simulation

roslaunch mbot_gazebo view_mbot_with_kinect_gazebo.launch

3) Terminal output:
(screenshot: terminal output)
4) The running Gazebo simulation:
(screenshot: Gazebo with the Kinect robot)

3. Open RViz and display the Kinect image

1) In a new terminal, enter the ROS workspace

cd ~/ros

2) Run RViz

rosrun rviz rviz

(screenshot: RViz main window)
3) Click Add -> By display type -> Image -> OK
(screenshot: RViz Add display dialog)
4) Configure the Image display
Configure it as in the figures: pick the image topic from the drop-down.
(screenshot: selecting the image topic)
The Image configuration, with the topic chosen from the drop-down, looks like this:
(screenshot: Image display settings)
5) The result:
(screenshot: Kinect image rendered in RViz)

V. Building the Gazebo environment with a Lidar sensor (point cloud)

1. Create the launch file that runs the Lidar sensor

1) Enter the launch folder of the package mbot_gazebo

cd ~/ros/src/mbot_gazebo/launch

2) Create the Lidar sensor's launch file

touch view_mbot_with_laser_gazebo.launch

3) Open the file

gedit view_mbot_with_laser_gazebo.launch

4) Write the laser model configuration:

<launch>

    <!-- Launch file arguments -->
    <arg name="world_name" value="$(find mbot_gazebo)/worlds/playground.world"/>
    <arg name="paused" default="false"/>
    <arg name="use_sim_time" default="true"/>
    <arg name="gui" default="true"/>
    <arg name="headless" default="false"/>
    <arg name="debug" default="false"/>

    <!-- Start the Gazebo simulation -->
    <include file="$(find gazebo_ros)/launch/empty_world.launch">
        <arg name="world_name" value="$(arg world_name)" />
        <arg name="debug" value="$(arg debug)" />
        <arg name="gui" value="$(arg gui)" />
        <arg name="paused" value="$(arg paused)"/>
        <arg name="use_sim_time" value="$(arg use_sim_time)"/>
        <arg name="headless" value="$(arg headless)"/>
    </include>

    <!-- Load the robot description parameter -->
    <param name="robot_description" command="$(find xacro)/xacro --inorder '$(find mbot_description)/urdf/xacro/gazebo/mbot_with_laser_gazebo.xacro'" /> 

    <!-- Run joint_state_publisher to publish the robot's joint states -->
    <node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher" ></node> 

    <!-- Run robot_state_publisher to publish tf -->
    <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher"  output="screen" >
        <param name="publish_frequency" type="double" value="50.0" />
    </node>

    <!-- Spawn the robot model in Gazebo -->
    <node name="urdf_spawner" pkg="gazebo_ros" type="spawn_model" respawn="false" output="screen"
          args="-urdf -model mrobot -param robot_description"/> 

</launch>
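Similarly, the Lidar data can be inspected from a second terminal before opening RViz. /scan is the conventional topic name for a sensor_msgs/LaserScan, but confirm it against what mbot_with_laser_gazebo.xacro actually publishes:

```shell
# Confirm the scan topic exists and carries sensor_msgs/LaserScan
rostopic info /scan
# Print a single scan message: angle_min/angle_max, range limits, the ranges array
rostopic echo -n 1 /scan
```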

2. Open a terminal and run the Lidar launch file

1) Enter the ROS workspace

cd ~/ros

2) Run the Lidar simulation

roslaunch mbot_gazebo view_mbot_with_laser_gazebo.launch

3) Terminal output:
(screenshot: terminal output)
4) The running Gazebo simulation:
(screenshot: Gazebo with the Lidar robot)

3. Open RViz and display the laser scan

1) In a new terminal, enter the ROS workspace

cd ~/ros

2) Run RViz

rosrun rviz rviz

(screenshot: RViz main window)
3) Click Add -> By display type -> LaserScan -> OK
(screenshot: RViz Add display dialog)
4) Configure the LaserScan display
Configure it as in the figures: set the Fixed Frame drop-down to odom.
(screenshot: selecting odom as the Fixed Frame)
The LaserScan configuration, with the topic chosen from the drop-down, looks like this:
(screenshot: LaserScan display settings)
5) The result:
(screenshot: laser scan rendered in RViz)
That's all for this post! I hope it helps everyone learning ROS robot simulation. If you run into problems, leave a comment below and I (Lin Jun) will answer — your senior is not aloof!
^ _ ^

Origin: blog.csdn.net/qq_42451251/article/details/105151069