[3D Detection Series - PointRCNN] Reproducing the PointRCNN code and visualizing PointRCNN 3D object detection, including pre-trained weight download links (starting from zero, with solutions to common errors)

[3D Detection Series - PointRCNN] Reproducing the PointRCNN code

1. Download the code

2. Prepare the data set

(1) Use the data set format provided by the official website

(2) Use soft links

3. Test results

4. Results visualization

(1) Display LiDAR only

(2) Display LiDAR and images

(3) Display LiDAR and images with a specific index

(4) Display LiDAR with point cloud labels/markers as a 5th dimension (modified LiDAR files)


First, here is the environment configuration:

Ubuntu 18.04

Python 3.6

PyTorch 1.8.0    torchvision 0.9.0    CUDA 11.1

(Don't rush to install these; there will be instructions later) mayavi 4.7.1, vtk 8.2.0, traits 6.2.0, traitsui 7.2.1, PyQt5 5.15.2


1. Download the code

https://github.com/sshaoshuai/PointRCNN

The PyTorch version of the code can be downloaded directly from GitHub; this step shouldn't need much explanation.

If you don't know how to download it, just open a terminal and run:

git clone https://github.com/sshaoshuai/PointRCNN

!!!! Notice !!!!

!!! The code is incomplete after downloading !!!

If you miss this step, the following error will be reported when you run the code: No module named 'iou3d_cuda'

The pointnet2_lib folder will be empty after cloning. You need to open that folder separately on GitHub, download its contents, and put them into the local code folder. Then you also need to run the following to build and install some tools:

sh build_and_install.sh
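If you'd rather not download pointnet2_lib by hand, git can fetch it for you. A minimal sketch, assuming the official repository keeps pointnet2_lib as a submodule (which is how it appears on GitHub):

# Clone the repo together with its submodules in one go
git clone --recursive https://github.com/sshaoshuai/PointRCNN.git

# Or, if you already cloned without --recursive
cd PointRCNN
git submodule update --init --recursive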

Error 1:

error: command 'gcc' failed with exit status 1

Solution: go to the directory ~/pointnet2_lib/pointnet2/src/ and change every THCState_getCurrentStream(state) in the cpp files there to c10::cuda::getCurrentCUDAStream().
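If you don't feel like editing each file by hand, here is a quick sketch using sed, assuming you are inside the repository root (the file list is an assumption based on my copy; -i.bak keeps backups of the originals):

cd pointnet2_lib/pointnet2/src/
sed -i.bak 's/THCState_getCurrentStream(state)/c10::cuda::getCurrentCUDAStream()/g' *.cpp *.cu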

Error 2:

A compile error complaining about AT_CHECK.

Solution: go to ~/lib/utils/roipool3d/src/roipool3d.cpp and change AT_CHECK in the file to TORCH_CHECK.
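Same idea here, a one-line sketch with sed run from the repo root (back up first; adjust the path if yours differs):

sed -i.bak 's/AT_CHECK/TORCH_CHECK/g' lib/utils/roipool3d/src/roipool3d.cpp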


2. Prepare the data set

(1) Use the data set format provided by the official website

First, you need to download the KITTI data set. Don't bother with the official website — the downloads there basically never work. Someone has kindly uploaded the data set to Baidu cloud disk, so you can get it from there: KITTI data set download (Baidu Cloud). (The uploader put in real effort — give them a like to show support!!)

The data set structure expected by the official repository is as follows (some newbies may find this part confusing — I'll try to explain it as clearly as possible; feel free to skip ahead if you already know it). You can refer to mine, sketched below:
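A sketch of that layout, written out from the directory structure described in the official PointRCNN README (planes is optional and only needed if you use ground planes):

PointRCNN
├── data
│   ├── KITTI
│   │   ├── ImageSets
│   │   ├── object
│   │   │   ├── training
│   │   │   │   ├── calib & velodyne & label_2 & image_2 & (optional: planes)
│   │   │   ├── testing
│   │   │   │   ├── calib & velodyne & image_2
├── lib
├── pointnet2_lib
├── tools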

(2) Use soft links

Because I ran PointPillars before running PointRCNN, I already had the data set on disk. To avoid making another copy, I simply use a soft link to point directly at the PointPillars data set.

In the data/KITTI folder of PointRCNN:

ln -s (path to the PointPillars data set) object

Here object is the name of the link that gets created. It is best to keep this name, otherwise the code has to be modified. The data set path should be the root directory that contains training and testing. Once the object entry appears, you're done — see the sketch below.
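A concrete sketch of what this looks like (the PointPillars path below is just an example — substitute the root directory of your own copy, the one that contains training and testing):

cd PointRCNN/data/KITTI
ln -s /home/user/PointPillars/data/kitti/object object    # example path, replace with yours
ls object/                                                # should list: training  testing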

3. Test results

You can use the author's pre-trained model for direct evaluation; put the model under tools. The original download is hosted on a foreign site and is hard to reach from here, so I uploaded it to CSDN: ---- PointRCNN pre-training weights ----

Next, start testing:

python eval_rcnn.py --cfg_file cfgs/default.yaml --ckpt PointRCNN.pth --batch_size 4 --eval_mode rcnn --set RPN.LOC_XZ_FINE false

Error 1:

TypeError: load() missing 1 required positional argument: 'Loader'

Solution:

pip install pyyaml==5.1
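Alternatively, if you'd rather not downgrade PyYAML, you can edit the offending yaml.load call yourself. A sketch, using grep to find the call (the exact file differs between copies, so I don't hard-code it):

cd PointRCNN
grep -rn "yaml.load(" lib/ tools/
# in the file grep points to, change yaml.load(f) to yaml.safe_load(f)
# (or pass Loader=yaml.SafeLoader explicitly)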

Here we go!!! Now the waiting starts!!

After about 10 minutes it finishes.

The detection results are placed in the following path:

PointRCNN/output/rcnn/default/eval/epoch_no_number/val/final_result/
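The files in that folder are plain-text detections in KITTI label format, one .txt per frame. A quick way to peek at them (the data/ subfolder and the 000001 index are from my run — yours may differ):

cd PointRCNN/output/rcnn/default/eval/epoch_no_number/val/final_result/
ls data/ | head          # one .txt file per validation frame
head data/000001.txt     # each line: class, truncation, occlusion, alpha, 2D bbox, 3D dimensions, location, rotation_y, score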

4. Results visualization

Clone visualization tools:

git clone https://github.com/kuixu/kitti_object_vis.git

After cloning, you must set up a soft link in the data folder, just as in 2.(2) above — you can delete the repo's own object folder and create the link in its place (see the sketch after the dependency install below). Then you need a few dependencies (remember to use a mirror, otherwise the install will be painfully slow!!!):

pip install opencv-python pillow scipy matplotlib pyside2
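As for the data soft link mentioned just above, a minimal sketch (the KITTI path is an example — point it at the same object directory you used for PointRCNN):

cd kitti_object_vis/data
rm -rf object                                          # remove the placeholder shipped with the repo
ln -s /home/user/PointRCNN/data/KITTI/object object    # example path, replace with yours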

Then you need to install mayavi with conda. pip didn't work for me here, and I honestly don't know why — but if you use pip for this step it basically won't work.

conda install mayavi

Then open a terminal in the kitti_object_vis folder:

(1) Display LiDAR only

python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis

The following error may occur at this point:

Error 1: ModuleNotFoundError: No module named 'vtkIOParallelPython'

Solution:

conda install jsoncpp=1.8.3
pip install pyface==7.3.0

Then rerun our command and the following window will appear (Done!!). Press Enter once in the terminal to view the next frame.

There are several other display modes, listed below (for details, see the kitti_object_vis source code on GitHub):

(2) Display LiDAR and images

python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis --show_image_with_boxes

(3) Display LiDAR and images with a specific index

python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis --show_image_with_boxes --ind 1 

(4) Display LiDAR with point cloud labels/markers as a 5th dimension (modified LiDAR files)

python kitti_object.py --show_lidar_with_depth --img_fov --const_box --vis --pc_label


I fumbled my way through this, so the write-up is a bit rough, but at least it works, hehe. If you have any improvements, please discuss them in the comments!!

You're done! This wasn't easy to write up, so if it worked for you, please follow or like. Thank you~~

