I originally planned to use the darknet_ros package to run YOLOv3 detection on a smart car equipped with Ubuntu 18.04, but ran into many problems along the way. This post records the whole process from start to finish, including the problems I encountered, their solutions, and the corresponding articles, so you don't have to search everywhere.
The first step is to get darknet_ros (YOLO V3) detection working under ROS.
Reference article
https://blog.csdn.net/qq_42145185/article/details/105730256
1. Code download
Code Github homepage: https://github.com/leggedrobotics/darknet_ros
Download command:
mkdir -p catkin_workspace/src
cd catkin_workspace/src
git clone --recursive git@github.com:leggedrobotics/darknet_ros.git
cd ../
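Since the repository pulls darknet in as a git submodule, it is worth confirming that the submodule actually came down with --recursive before building; an empty darknet/ folder is a common cause of later build failures. A minimal sketch (the workspace path is an assumption, adjust it to yours):

```shell
# Check that the darknet submodule inside darknet_ros is populated.
# A correctly checked-out submodule contains darknet's Makefile.
check_submodule() {
  if [ -f "$1/darknet/Makefile" ]; then
    echo "submodule OK"
  else
    echo "submodule missing: run 'git submodule update --init --recursive'"
  fi
}

# Hypothetical workspace path -- adjust to where you cloned the package.
check_submodule ~/catkin_workspace/src/darknet_ros
```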
The download may take a while; please wait patiently.
If your Ubuntu system cannot pull code from GitHub directly, you can switch to a mirror. On the GitHub page, click Clone and choose SSH or HTTPS from the drop-down menu; you then only need to change the URL in the git clone command accordingly.
via ssh
git clone --recursive git@gitcode.net:mirrors/leggedrobotics/darknet_ros.git
via https
git clone --recursive https://gitcode.net/mirrors/leggedrobotics/darknet_ros.git
For other problems you may encounter while downloading, refer to the section on downloading the darknet_ros package in this article:
http://t.csdn.cn/hPQyd
Because the package had to be moved onto the car's Ubuntu system, I first downloaded it under Windows 11 and then copied it over on a USB flash drive. Files can go missing or get corrupted during this transfer, which can cause errors later, so this approach is not recommended. It is best to clone directly with the commands above rather than downloading GitHub's zip or other compressed archives.
For example, starting YOLOv3 to detect objects seen by the camera:
roslaunch darknet_ros darknet_ros.launch
Error report
This error is most likely caused by some .cfg files becoming unreadable after the package was moved from Windows to Ubuntu.
Go back to the /home/xx/catkin_ws/src/darknet_ros directory, open a terminal, and run:
git stash
then run the launch command again.
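If git stash alone does not clear the error, the usual culprit is Windows-style CRLF line endings picked up by the .cfg files during the transfer. A minimal sketch that strips the carriage returns in place (the workspace path is an assumption; point it at your own checkout):

```shell
# Strip trailing carriage returns (\r) from every .cfg file under a directory,
# so darknet's parser can read files that were copied through Windows.
strip_crlf() {
  find "$1" -name '*.cfg' -exec sed -i 's/\r$//' {} + 2>/dev/null
}

# Hypothetical path -- adjust to your own darknet_ros checkout.
CFG_DIR=~/catkin_ws/src/darknet_ros/darknet_ros/yolo_network_config/cfg
[ -d "$CFG_DIR" ] && strip_crlf "$CFG_DIR" || true
```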
2. Compile
In the ROS workspace directory, execute the command:
catkin_make -DCMAKE_BUILD_TYPE=Release
This compiles the entire project. After compilation finishes, the build checks whether the two model files yolov2-tiny.weights and yolov3.weights exist under {catkin_ws}/darknet_ros/darknet_ros/yolo_network_config/weights. Because of their size, the repository does not ship these files, so it automatically starts downloading them after compiling, which is another long wait.
If you have already downloaded the model files, just copy them into the folder above before compiling and they will not be downloaded again.
Download link:
https://pjreddie.com/media/files/yolov2.weights
https://pjreddie.com/media/files/yolov2-tiny.weights
https://pjreddie.com/media/files/yolov3.weights
https://pjreddie.com/media/files/yolov3-tiny.weights
Put the downloaded files into the /darknet_ros/darknet_ros/yolo_network_config/weights folder.
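The manual download can also be scripted with wget, using the pjreddie URLs above. A sketch (the workspace path in the example call is hypothetical; adjust it before running):

```shell
# Download the listed weight files into the weights folder, skipping any
# that are already present, so catkin_make will not re-download them.
fetch_weights() {
  dir="$1"; shift
  mkdir -p "$dir"
  for f in "$@"; do
    [ -f "$dir/$f" ] || wget -q -P "$dir" "https://pjreddie.com/media/files/$f"
  done
}

# Example (adjust the path to your workspace before running):
# fetch_weights ~/catkin_workspace/src/darknet_ros/darknet_ros/yolo_network_config/weights \
#   yolov2-tiny.weights yolov3.weights
```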
Then run the compile command again:
catkin_make -DCMAKE_BUILD_TYPE=Release
3. Image topic publishing
darknet_ros subscribes to a specified image topic, runs detection on each image, draws the detection boxes, and publishes the corresponding detection topics. So you first need a ROS package that can publish image topics. The official usb_cam driver package is recommended here: it publishes images captured by the computer's built-in camera, or a USB camera connected to it, as a ROS image topic.
Download the camera driver:
sudo apt-get install ros-kinetic-usb-cam
Note that your own system may not be running the kinetic version, so first query your ROS version with the terminal command:
rosversion -d
If it is the melodic version, use:
sudo apt-get install ros-melodic-usb-cam
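To avoid hard-coding the distro name, you can build the package name from rosversion directly. A sketch (it only prints the install command; the melodic fallback is an assumption for machines where rosversion is unavailable):

```shell
# Derive the usb_cam package name for the installed ROS distribution.
ROS_DISTRO_NAME=$(rosversion -d 2>/dev/null || echo melodic)  # fallback: melodic
PKG="ros-${ROS_DISTRO_NAME}-usb-cam"
echo "sudo apt-get install $PKG"  # run the printed command to install
```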
Then publish the webcam image topic:
roslaunch usb_cam usb_cam-test.launch
If all goes well, you should see a live image display window.
4. Start the camera test node
Error 1
After entering the camera test launch command, an error occurs.
Solution
1. Try plugging the camera into a USB 3.0 port. USB 3.0, also known as SuperSpeed USB, transfers data faster than the Hi-Speed USB 2.0 bus.
2. Also, the option shown in the figure below must be checked; otherwise the camera is not actually connected.
Error 2
Solution:
sudo apt-get install ros-melodic-image-view
Error 3
Set the USB device and check USB 3.0 as above.
Then run darknet_ros
Next, run darknet_ros to perform detection. Before running it, you need to edit the configuration file so that the topic darknet_ros subscribes to matches the image topic published by usb_cam.
Open the darknet_ros/config/ros.yaml file and find:
Modify the topic in camera_reading as shown in the figure above, namely:
subscribers:
  camera_reading:
    topic: /usb_cam/image_raw
    queue_size: 1
Then go back to the root directory of the darknet workspace and execute:
source devel/setup.bash
roslaunch darknet_ros darknet_ros.launch
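Once the node is up, you can sanity-check that detections are flowing. A sketch (the topic names follow the package's default ros.yaml; verify them on your own setup with rostopic list):

```shell
# List the darknet_ros / usb_cam topics if a ROS environment is available.
show_detection_topics() {
  if command -v rostopic >/dev/null 2>&1; then
    rostopic list 2>/dev/null | grep -E 'darknet_ros|usb_cam' \
      || echo "no darknet_ros/usb_cam topics (are roscore and the nodes running?)"
  else
    echo "ROS environment not sourced; run 'source devel/setup.bash' first"
  fi
}

show_detection_topics
# With everything running you would typically see, e.g.:
#   /darknet_ros/bounding_boxes
#   /darknet_ros/detection_image
#   /usb_cam/image_raw
```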
To detect with YOLOv3, we only need to swap in the YOLOv3 pretrained model. Find the config folder and you can see the available network configurations.
Open the launch file and modify darknet_ros.launch, changing
<arg name="network_param_file" default="$(find darknet_ros)/config/yolov2-tiny.yaml"/>
to
<arg name="network_param_file" default="$(find darknet_ros)/config/yolov3.yaml"/>
Then relaunch:
roslaunch darknet_ros darknet_ros.launch
With that, detection should basically work. Most of the errors above and their solutions came from the following reference articles.
Implementation of darknet_ros (YOLO V3) detection under ROS
https://blog.csdn.net/qq_42145185/article/details/105730256
Car YOLO robotic arm (1): building a car in Gazebo under ROS (keyboard control), installing a simulated camera, and loading YOLO to detect and identify marked objects
https://blog.csdn.net/WhiffeYF/article/details/109187804
ROS learning (3): using the laptop camera and an external USB camera
https://blog.csdn.net/m0_56451176/article/details/126174060
Using a camera under ROS
https://blog.csdn.net/wilylcyu/article/details/51732710
Turning on the camera in ROS: detailed steps
https://blog.csdn.net/weixin_41074793/article/details/83474501
ROS study notes: connecting a USB camera in ROS
https://blog.csdn.net/weixin_51244852/article/details/116169460
Using the usb_cam driver to read camera data under ROS
https://blog.csdn.net/Yangxiaoaijiejie/article/details/127061479
One more note about an error when using YOLOv3. The darknet_ros package contains several launch files, and if you use the following command:
roslaunch darknet_ros yolov3.launch
an error may be reported that the yolov3-tiny.cfg file is missing. If the file cannot be found, you need to manually create a new .cfg file and paste in its contents.
The file at the link below contains Chinese comments and may cause parse errors; a cleaned-up version is given below for reference.
https://blog.csdn.net/weixin_44152895/article/details/106570976
[net]
# Testing
batch=1
subdivisions=1
# Training
# batch=64
# subdivisions=2
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1
learning_rate=0.001
burn_in=1000
max_batches = 500200
policy=steps
steps=400000,450000
scales=.1,.1
[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=2
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[maxpool]
size=2
stride=1
[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky
###########
[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky
[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky
[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear
[yolo]
mask = 3,4,5
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
classes=80
num=6
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1
[route]
layers = -4
[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky
[upsample]
stride=2
[route]
layers = -1, 8
[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky
[convolutional]
size=1
stride=1
pad=1
filters=255
activation=linear
[yolo]
mask = 0,1,2
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
classes=80
num=6
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1