[2023 PyTorch Detection Tutorial] A Step-by-Step Guide to Counting Wheat Ears with YOLOv5

Wheat is the most widely grown food crop in the world in terms of both planting area and output; in 2021, global wheat consumption reached 754 million tons. Timely yield estimation has a major impact on crop production, grain prices, and food security, and the number of ears per unit area is both the hardest and the most important part of wheat yield estimation. Manual estimation relies on an expert's visual judgment, so its accuracy cannot be guaranteed; sampling-based estimation collects plants from a few plots for manual counting and weighing, which is slow and labor-intensive. With the development of computer vision, much research has focused on counting the wheat ears in a single image to estimate yield: convolutional neural networks learn ear features automatically, and models trained on large amounts of data can count the ears in an image, providing data for subsequent yield estimation. However, some existing ear-counting studies are built on generic counting networks that are not tuned for the varying scales and high density of wheat ears, so their accuracy still has room for improvement.

In this post we combine the deep-learning algorithm YOLOv5 with agriculture and count the wheat ears in a region via object detection. Without further ado, here are the results.

image-20230322214806157

val_batch1_prev

PR_curve_wheat

Notes

  1. Use English paths and avoid Chinese characters in paths; Chinese paths can cause installation errors and image-reading failures.
  2. When running code in PyCharm, always check the lower-left corner to confirm you are in the virtual environment.
  3. Library versions matter; using the code provided with this tutorial will save you a lot of trouble.

Preparation

Basic computer setup and software installation are not covered again here; for the preliminary preparation, refer to the blog post below.

[2023 PyTorch Detection Tutorial] A Step-by-Step Guide to Detecting Power-Line Insulator Defects with YOLOv5 (肆十二's blog, CSDN)

Environment configuration

Now for the key part: environment configuration. After downloading the code you will get an archive; extract it in place, then open CMD in that folder to start configuring the environment. Your folder should be named yolov5-wheat; I am reusing an older screenshot, so the project name differs slightly — just make sure you open the terminal inside your wheat-counting folder.

image-20230315150456715

To speed up the installation of third-party libraries later, we first add a few China-based mirrors. Copy and paste the commands below into your command line.

conda config --remove-key channels
conda config --add channels https://mirrors.ustc.edu.cn/anaconda/pkgs/main/
conda config --add channels https://mirrors.ustc.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.bfsu.edu.cn/anaconda/cloud/pytorch/
conda config --set show_channel_urls yes
pip config set global.index-url https://mirrors.ustc.edu.cn/pypi/web/simple

After they run, your output should look roughly like the screenshot below, and subsequent library downloads will be much faster.

image-20230315150835331

Create a virtual environment

First we need to create a virtual environment for the project, then create and activate it with the commands below.

We create a virtual environment named yolo with Python version 3.8.5.

conda create -n yolo python==3.8.5
conda activate yolo

image-20230318200231348

Remember to activate your virtual environment here, otherwise the libraries will later be installed into the base environment. The name in parentheses at the front of the prompt shows which environment is currently active.

Pytorch installation

Note that PyTorch is unlike the other libraries: its installation involves CUDA and cuDNN. Generally speaking, 30-series graphics cards need CUDA 11 or higher, while 10- and 20-series cards usually use CUDA 10.2. The installation commands for 30-series cards, for cards below the 30 series, and for CPU-only machines are given below; pick the one that matches your hardware. My machine has a 3060, so I run the first command.

conda install pytorch==1.10.0 torchvision torchaudio cudatoolkit=11.3 # GPU build for 30-series and newer cards
conda install pytorch==1.8.0 torchvision torchaudio cudatoolkit=10.2 # for 10-series, 20-series, and MX-series cards
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cpuonly # for CPU-only machines

After installation, you can test whether the GPU is available just as I do below: if the output is True, the GPU build of PyTorch is working.

image-20230315152204221
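The check itself is one line of standard PyTorch; the small helper below just wraps it so the script also behaves sensibly if torch is missing:

```python
def gpu_status() -> str:
    """Report the installed torch version and whether CUDA sees a GPU."""
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    return f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}"

print(gpu_status())
```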

Installing the remaining libraries

Installing the remaining libraries is simple; we do it with pip. Make sure there is a requirements.txt file in the directory where you run the command, otherwise you will get a file-not-found error. You can check whether the file exists with the dir command.

pip install -r requirements.txt

image-20230318200435082

Running in PyCharm

We open the project in PyCharm, partly to browse the code conveniently and partly to make it easy to run: just right-click the project folder and open it directly.

After opening it you will see an interface like this: the file browser is on the left, the editor in the middle, tool windows at the bottom, and the active virtual environment in the lower-right corner.

image-20230318200610541

Next we need to select a virtual environment for the project. This step is very important; some readers skip it and then run into a pile of strange bugs. The steps are as follows.

First, click to add an interpreter.

image-20230315153552702

Follow the three steps to select the virtual environment we just created, then click OK.

image-20230315153636384

image-20230315153728665

After that, right-click main_window.py and run it; if the screen below appears, your setup is working.

image-20230315153838054

Dataset preparation

I have uploaded the dataset to CSDN. You can label images yourself to build a dataset, or use the one I have already processed. After downloading it, put it in the wheat_yolo_format directory at the same level as the code directory.

image-20230322214906698

After unpacking the dataset you will see two folders: the images directory holds the image files and the labels directory holds the label files.

image-20230322215024502
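Each .txt file under labels uses the standard YOLO annotation format: one object per line, written as `class x_center y_center width height`, with all coordinates normalized to the 0–1 range. A minimal parser to illustrate the format:

```python
def parse_yolo_label(line: str):
    """Parse one line of a YOLO-format label file into (class_id, box)."""
    parts = line.split()
    cls = int(parts[0])
    x, y, w, h = map(float, parts[1:5])  # normalized center x/y, width, height
    return cls, (x, y, w, h)

# Example: class 0 (wheat ear), box centered at (0.5, 0.4), 10% x 20% of the image
cls, box = parse_yolo_label("0 0.5 0.4 0.1 0.2")
print(cls, box)
```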

Make a note of your dataset path here; we will need it later for training. Mine, for example, is F:/new_project/02mai/wheat_yolo_format.
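YOLOv5 reads this path from a small YAML data config. A sketch of what the wheat data config might look like — the subdirectory names, class count, and class name are assumptions here; use your own dataset path and the config file shipped with the code:

```yaml
# wheat.yaml — hypothetical YOLOv5 data config for this dataset
train: F:/new_project/02mai/wheat_yolo_format/images   # training images
val: F:/new_project/02mai/wheat_yolo_format/images     # validation images
nc: 1                                                  # one class: wheat ear
names: ["wheat"]
```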

Training and testing

Note: training is optional here. I have already put a trained model in the runs/train directory, and you can use it directly.

The training process is as follows. The dataset and model configuration files are already written; you only need to replace the dataset path in the data config with your own path, then run go_train.py to start training.

image-20230322215255658

go_train.py contains three commands, corresponding to the small, medium, and large YOLOv5 models. For example, to train the s model, simply comment out the commands for the other two.

image-20230315155028259
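I do not have the exact contents of go_train.py in front of me, but based on the description it most likely shells out to YOLOv5's train.py once per model size, with the unused sizes commented out. A hedged sketch — the config file names, epoch count, and flag values are assumptions:

```python
def train_cmd(size: str, data_cfg: str = "data/wheat.yaml", epochs: int = 100) -> str:
    """Build a YOLOv5 train.py command line for model size 's', 'm', or 'l'."""
    return (f"python train.py --weights yolov5{size}.pt "
            f"--cfg models/yolov5{size}.yaml --data {data_cfg} --epochs {epochs}")

# Keep only the size you want to train, e.g. the small model:
print(train_cmd("s"))
# print(train_cmd("m"))
# print(train_cmd("l"))
```

In go_train.py itself these commands would be executed (e.g. via os.system) rather than printed.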

Once it runs, the training log is printed below. The red text here is just log output, not an error, so don't panic.

image-20230315155258835

Taking the s model as an example, the output means the following.

image-20230315155441956

To evaluate, use go_test.py; its three lines are the test commands for the s, m, and l models respectively.

image-20230322215456308

The test results are as follows:

image-20230322215345896

Graphical program

Finally, we run the graphical interface program.

image-20230322215558210

Right-click window_main.py and run it. Two screenshots of the result are shown below.

image-20230322215519120


Original post: blog.csdn.net/ECHOSON/article/details/129721592