Installation of darknet_ros and target detection on PX4 UAV simulation platform
Reference materials:
https://github.com/leggedrobotics/darknet_ros
https://gitee.com/robin_shaun/XTDrone
https://www.yuque.com/xtdrone/manual_cn/target_detection_tracking
https://blog.csdn.net/qq_42145185/article/details/105730256
The construction of the drone simulation platform and the target recognition and tracking below are based on Xiao Kun's XTDrone project (https://gitee.com/robin_shaun/XTDrone), for which I am very grateful. This post is just my summary of installing darknet_ros and using it on the UAV simulation platform.
If there is any infringement, please contact me.
One, install ROS
Please install ROS following the official instructions: http://wiki.ros.org/Installation/Ubuntu. When installing, pay attention to the correspondence between ROS and Ubuntu versions. If you need a specific Gazebo version, change the command sudo apt-get install ros-melodic-desktop-full (using melodic for Ubuntu 18.04 as an example) to sudo apt-get install ros-melodic-desktop, and then install Gazebo separately.
Create a workspace in the home directory after the ros installation is complete
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws
catkin_make
source devel/setup.bash
Two, install OpenCV and boost
OpenCV download official website: https://opencv.org/releases/
boost download official website: https://www.boost.org/
OpenCV 3.2 is already installed as part of ROS, so you can choose to skip this step. If you do want to install OpenCV yourself, please use version 3.3 or below; with higher versions, errors may occur when compiling darknet_ros. This problem is discussed again below.
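Before compiling darknet_ros, you can check which OpenCV version is actually installed. The snippet below is only a sketch; it assumes either pkg-config metadata or the Python cv2 bindings are present:

```shell
# Try pkg-config first (OpenCV 3.x registers as "opencv"), then fall back to Python.
ver=$(pkg-config --modversion opencv 2>/dev/null \
      || python -c "import cv2; print(cv2.__version__)" 2>/dev/null \
      || echo "unknown")
echo "OpenCV version: $ver"
# Versions above 3.3 caused compile errors with darknet_ros in this guide.
case "$ver" in
  2.*|3.0*|3.1*|3.2*|3.3*) echo "OK for darknet_ros" ;;
  unknown) echo "could not detect OpenCV" ;;
  *) echo "warning: may fail to compile darknet_ros" ;;
esac
```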
Three, install usb_cam
If you want to use a USB camera as input, you need the ROS usb_cam package, whose repository is https://github.com/bosch-ros-pkg/usb_cam. Installing it or not does not affect the later UAV simulation target recognition and tracking.
Download usb_cam and configure the environment
cd ~/catkin_ws/src
git clone https://github.com/bosch-ros-pkg/usb_cam.git
cd ..
catkin_make_isolated
source ~/catkin_ws/devel/setup.bash
Compile usb_cam
cd src/usb_cam
mkdir build
cd build
cmake ..
make
Test the usb camera
1. Open the launch file and set the video device according to your camera number. The laptop's built-in camera is usually video0, and a USB camera is usually video1.
cd ~/catkin_ws/src/usb_cam/launch
gedit usb_cam-test.launch
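If you are unsure which device number your camera has, the following sketch lists the available video devices (v4l2-ctl comes from the v4l-utils package and may need to be installed separately):

```shell
# List all video device nodes; /dev/video0 is usually the built-in camera.
ls /dev/video* 2>/dev/null || echo "no video devices found"
# Map device nodes to camera names (optional, needs v4l-utils):
v4l2-ctl --list-devices 2>/dev/null || true
```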
2. Open a new terminal and run roscore
roscore
3. Run the test
cd ~/catkin_ws/src/usb_cam/launch
roslaunch usb_cam usb_cam-test.launch
If the image is displayed, everything is working correctly. You can press Ctrl+C to stop the program.
Errors may occur at runtime; in that case, source the workspace first and launch again:
source /home/youruser/catkin_ws/devel/setup.bash
cd ~/catkin_ws/src/usb_cam/launch
roslaunch usb_cam usb_cam-test.launch
For convenience, so that you do not have to set the environment variables every time, you can add them directly to ~/.bashrc
sudo gedit ~/.bashrc
Add at the bottom of ~/.bashrc:
source ~/catkin_ws/devel/setup.bash
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:~/catkin_ws/
Reload ~/.bashrc
source ~/.bashrc
echo $ROS_PACKAGE_PATH # if a path is printed, the setup succeeded
Four, darknet_ros download and install
Darknet_ros source webpage: https://github.com/leggedrobotics/darknet_ros
Please set up an SSH key before running git clone, otherwise the source code cannot be downloaded from GitHub
cd ~/catkin_ws/src
git clone --recursive git@github.com:leggedrobotics/darknet_ros.git
cd ../
If the clone fails with an error such as 'Permission denied (publickey)', the SSH key has not been set up. SSH key setup tutorial:
https://blog.csdn.net/qq_45067735/article/details/108027310
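As a minimal sketch of that setup (the key path and email comment here are placeholders): generate a key, add the contents of the .pub file to GitHub under Settings -> SSH and GPG keys, then test the connection:

```shell
# Generate an SSH key pair if one does not exist yet.
KEY=~/.ssh/id_rsa_github
mkdir -p ~/.ssh
[ -f "$KEY" ] || ssh-keygen -t rsa -b 4096 -C "you@example.com" -f "$KEY" -N "" -q
cat "$KEY.pub"          # add this line to your GitHub account
# After adding the key, this should greet you by username:
ssh -T git@github.com || true
```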
Because the download can be very slow, I also provide the darknet_ros archive here; after downloading, extract it into catkin_ws/src: https://download.csdn.net/download/qq_45067735/12713492
Compile darknet_ros
catkin_make -DCMAKE_BUILD_TYPE=Release
An error may occur during this compilation. It is caused by the OpenCV version: with OpenCV 4.4 installed I got such an error, and after switching to OpenCV 3.2 it compiled without problems. This is why I recommended installing OpenCV 3.3 or below.
Run darknet_ros for detection. Before running the detection, you need to change the configuration file so that the topic darknet_ros subscribes to matches the image topic published by usb_cam.
Open the catkin_ws/src/darknet_ros/darknet_ros/config/ros.yaml file and modify:
subscribers:
  camera_reading:
    topic: /camera/rgb/image_raw
    queue_size: 1
Change to
subscribers:
  camera_reading:
    topic: /usb_cam/image_raw
    queue_size: 1
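The same change can be made non-interactively with sed; this is only a sketch and assumes the default workspace layout used in this guide:

```shell
CFG=~/catkin_ws/src/darknet_ros/darknet_ros/config/ros.yaml
if [ -f "$CFG" ]; then
  # Point darknet_ros at the usb_cam image topic.
  sed -i 's#/camera/rgb/image_raw#/usb_cam/image_raw#' "$CFG"
  grep 'topic:' "$CFG"
else
  echo "$CFG not found: adjust the path to your workspace"
fi
```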
Set the environment variables
cd ~/catkin_ws
source devel/setup.bash
Enable YOLO
roslaunch darknet_ros darknet_ros.launch
At the same time, open another terminal to enable usb_cam
roslaunch usb_cam usb_cam-test.launch
An error may occur when running. This problem troubled me for a long time; I asked several people, and they offered various ideas and solutions, some of which worked and some of which did not. Since the cause may differ from machine to machine, I list all of them here:
1. Weight file problem: download the weight files from the official website.
2. darknet_ros download problem: download the code from gitee instead and rebuild.
3. ROS problem: the first two methods did not help in my case. It turned out to be a ROS problem: the error appeared when ROS was installed from a domestic mirror, and disappeared after reinstalling ROS from the official source.
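For cause 1, the weight files can also be fetched manually. The sketch below uses the pjreddie.com download URLs and the default weights directory of the leggedrobotics/darknet_ros repository; adjust the path if your layout differs:

```shell
WEIGHTS_DIR=~/catkin_ws/src/darknet_ros/darknet_ros/yolo_network_config/weights
if cd "$WEIGHTS_DIR" 2>/dev/null; then
  # -N skips files that are already up to date.
  wget -N https://pjreddie.com/media/files/yolov2-tiny.weights
  wget -N https://pjreddie.com/media/files/yolov3.weights
else
  echo "$WEIGHTS_DIR not found: adjust the path to your workspace"
fi
```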
After it runs successfully, the detection window appears. Although the window title says YOLO V3, it is actually running YOLOv2-tiny, and its detection results have some problems. To test with the YOLOv3 network instead, we need to change the network parameter file in the launch file: open darknet_ros.launch and modify
<arg name="network_param_file" default="$(find darknet_ros)/config/yolov2-tiny.yaml"/>
Change to
<arg name="network_param_file" default="$(find darknet_ros)/config/yolov3.yaml"/>
Restart YOLO v3
roslaunch darknet_ros darknet_ros.launch
Without an NVIDIA GPU, the frame rate is very low, around 0.1 fps. To achieve real-time detection you need to modify the darknet Makefile; before that, please install the NVIDIA graphics driver, CUDA, and cuDNN.
Find the Makefile in ~/catkin_ws/src/darknet_ros/darknet.
Make modifications according to your needs:
GPU=1 use CUDA and the GPU (default CUDA path: /usr/local/cuda)
CUDNN=1 use cuDNN v5-v7 to accelerate the network (default cuDNN path: /usr/local/cudnn)
OPENCV=1 use OpenCV 4.x/3.x/2.4.x to run detection on video and camera input
OPENMP=1 use OpenMP to accelerate with multiple CPU cores
DEBUG=1 build the debug version
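The edits above can also be applied non-interactively; the sketch below assumes the flags currently read GPU=0 etc. at the start of their lines, and makes a backup of the Makefile first:

```shell
MK=~/catkin_ws/src/darknet_ros/darknet/Makefile
if [ -f "$MK" ]; then
  cp "$MK" "$MK.bak"
  # Enable GPU, cuDNN, and OpenCV support.
  sed -i 's/^GPU=0/GPU=1/; s/^CUDNN=0/CUDNN=1/; s/^OPENCV=0/OPENCV=1/' "$MK"
  grep -E '^(GPU|CUDNN|OPENCV|OPENMP|DEBUG)=' "$MK"
else
  echo "$MK not found: adjust the path to your workspace"
fi
```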
After completing the modification, you need to compile in the workspace:
cd ~/catkin_ws
catkin_make
Then start darknet_ros again; you will see that the fps has improved a lot.
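To put a number on the improvement, you can time the detection image topic. This is a sketch; it assumes darknet_ros publishes on its default topic /darknet_ros/detection_image and that roscore, the camera, and darknet_ros are all running:

```shell
if command -v rostopic >/dev/null 2>&1; then
  # Prints the publishing rate of the detection images; stop with Ctrl+C.
  rostopic hz /darknet_ros/detection_image
else
  echo "rostopic not found: source your ROS environment first"
fi
```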
Five, the basic construction of the drone simulation platform
You can refer to my previous blog: https://blog.csdn.net/qq_45067735/article/details/107303796
Six, target detection and tracking on the UAV simulation platform
Compile darknet_ros again
cp -r XTDrone/sensing/object_detection_and_tracking/YOLO/* ~/catkin_ws/src/
cd ~/catkin_ws
catkin_make
Enable YOLO
source devel/setup.bash
roslaunch darknet_ros px4_tracking.launch
At this point YOLO first loads the network parameters and then waits for images to arrive. Next, start the PX4 outdoor scene simulation; YOLO then receives the images and starts target detection.
cd ~/PX4_Firmware
roslaunch px4 outdoor1.launch
At first you will see water, because the camera starts partly below the ground plane and there is an ocean beneath the ground. After the scene has loaded, drag the view to the right; the red arrow marks the drone's initial position.
Establish communication
cd ~/XTDrone/communication
python multirotor_communication.py typhoon_h480 0
Control the drone to take off
cd ~/XTDrone/control
python multirotor_keyboard_control.py typhoon_h480 1 vel
Enable gimbal control
cd ~/XTDrone/sensing/gimbal
python run.py typhoon_h480 0
You can either wait in place for pedestrians to pass by, or actively fly the aircraft to look for them. Once a target appears, close multirotor_keyboard_control.py (otherwise the commands of the two programs will conflict), and then start the tracker (note: the path in sys.path.append('/home/robin/catkin_ws/devel/lib/python2.7/dist-packages') must be adjusted to your own setup).
cd ~/XTDrone/control
python yolo_human_tracking.py typhoon_h480 0
Then the aircraft will automatically track the detected pedestrian.
Sometimes the lawn texture is not displayed; this does not affect the simulation, it just looks less nice.