[Deeplabv3+] Using PyTorch to reproduce Deeplabv3+ on Ubuntu 18.04 (Step 2) ----- KITTI dataset prediction

Before reading this article, please first read the blogger's previous post: [Deeplabv3+] Using PyTorch to reproduce Deeplabv3+ on Ubuntu 18.04 (Step 1) ----- Environment configuration (Da Fengtian's blog, CSDN).

After configuring the environment, proceed to the next step

Table of contents

1. Source code, dataset, and pre-trained weights download

(1) Source code download

(2) KITTI dataset download

(3) Pre-trained weights download

2. Prediction

1. Single-image prediction

2. Whole-dataset prediction


1. Source code, dataset, and pre-trained weights download

(1) Source code download

Source code location: https://github.com/VainF/DeepLabV3Plus-Pytorch

Click: Code > Download ZIP to download.

After the download is complete, unzip it to the Ubuntu desktop (other locations also work) to get the project folder DeepLabV3Plus-Pytorch-master.
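Equivalently, the ZIP can be fetched and unpacked from a terminal (a minimal sketch; GitHub's standard branch-archive URL is assumed):

# Download the ZIP of the master branch and unzip it onto the desktop,
# producing ~/Desktop/DeepLabV3Plus-Pytorch-master
wget https://github.com/VainF/DeepLabV3Plus-Pytorch/archive/refs/heads/master.zip -O DeepLabV3Plus-Pytorch-master.zip
unzip DeepLabV3Plus-Pytorch-master.zip -d ~/Desktop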

(2) KITTI dataset download

A partial list of the KITTI raw data archives: https://github.com/ErenBalatkan/Bts-PyTorch/blob/master/kitti_archives_to_download.txt

This article selects one of these archives for testing. The link is as follows: https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_29_drive_0004/2011_09_29_drive_0004_sync.zip

Download the 2011_09_29_drive_0004 archive, unzip it, and place the image data under the project folder DeepLabV3Plus-Pytorch-master/datasets/data.

Note: The dataset is conventionally placed under the project folder, but you can choose the location yourself, or create a new folder under DeepLabV3Plus-Pytorch-master to store it. The commands below sketch this step.
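As a sketch, the download and placement can be done from the terminal (this assumes the project folder is on the desktop and that the archive unzips into the standard KITTI 2011_09_29/2011_09_29_drive_0004_sync layout):

# Download and unzip the drive archive
wget https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_29_drive_0004/2011_09_29_drive_0004_sync.zip
unzip 2011_09_29_drive_0004_sync.zip
# Copy the color camera images used in this article into datasets/data
cp -r 2011_09_29/2011_09_29_drive_0004_sync/image_02 ~/Desktop/DeepLabV3Plus-Pytorch-master/datasets/data/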

image_00 and image_01 in the dataset are grayscale camera images;

image_02 and image_03 are color camera images;

oxts contains the recorded IMU and GNSS data;

velodyne_points contains the Velodyne LiDAR point cloud data.

This article only uses image_02 for testing.

train_aug.txt is already included in the project folder DeepLabV3Plus-Pytorch-master/datasets/data and can be ignored.
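To confirm the images are in place before predicting, you can list a few frames (assuming the layout above):

# KITTI frames are numbered 0000000000.png, 0000000001.png, ...
ls ~/Desktop/DeepLabV3Plus-Pytorch-master/datasets/data/image_02/data | head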

(3) Pre-trained weights download

Download address (either one works):

https://www.dropbox.com/sh/w3z9z8lqpi8b2w7/AAB0vkl4F5vy6HdIhmRCTKHSa?dl=0

or

https://share.weiyun.com/qqx78Pv5

This article only uses best_deeplabv3plus_mobilenet_cityscapes_os16.pth; the others can be downloaded and tested on your own.

After the download is complete, create a new checkpoints folder inside the project folder DeepLabV3Plus-Pytorch-master and place the best_deeplabv3plus_mobilenet_cityscapes_os16.pth file in it.
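From a terminal this is simply (assuming the weights file landed in ~/Downloads):

cd ~/Desktop/DeepLabV3Plus-Pytorch-master
# Create the checkpoints folder and move the downloaded weights into it
mkdir -p checkpoints
mv ~/Downloads/best_deeplabv3plus_mobilenet_cityscapes_os16.pth checkpoints/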

2. Prediction

1. Single-image prediction

# Activate the virtual environment created in the previous article
conda activate deeplabv3+
# Change into the project folder
cd DeepLabV3Plus-Pytorch-master
# Run the prediction code
python3 predict.py --input ~/Desktop/DeepLabV3Plus-Pytorch-master/datasets/data/image_02/data/0000000001.png --dataset cityscapes --model deeplabv3plus_mobilenet --ckpt checkpoints/best_deeplabv3plus_mobilenet_cityscapes_os16.pth --save_val_results_to test_result

# ~/Desktop/DeepLabV3Plus-Pytorch-master/datasets/data/image_02/data/0000000001.png is the path of the single image to predict; adjust it to your own data location
# best_deeplabv3plus_mobilenet_cityscapes_os16.pth is the path to the pre-trained weights
# The prediction result is saved in the test_result folder (the code creates this folder automatically while running)

The single-image prediction code takes only a few seconds to run. The prediction result can be viewed in DeepLabV3Plus-Pytorch-master/test_result.

2. Whole-dataset prediction

Just change the image path from the previous step to the whole folder (the speed depends on your graphics card; this article's card is an RTX 2060, and the whole process takes a little over 20 seconds):
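For example, based on the single-image command above (predict.py in this repository also accepts a directory for --input):

# Predict every image in image_02; results are again written to test_result
python3 predict.py --input ~/Desktop/DeepLabV3Plus-Pytorch-master/datasets/data/image_02/data --dataset cityscapes --model deeplabv3plus_mobilenet --ckpt checkpoints/best_deeplabv3plus_mobilenet_cityscapes_os16.pth --save_val_results_to test_result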

The result is as shown below:

Since this is my own thesis topic, I will continue with reproducing the dataset training and testing process. If you have any questions about the reproduction, please leave a comment in the comment section and the blogger will do his best to help.


Origin blog.csdn.net/m0_62648611/article/details/129443631