Reproducing EfficientDet + training your own dataset

EfficientDet has shown extremely strong detection results in industrial inspection. It competes on the same track as YOLOv3 and YOLOv5, held the leading position for more than six months, and other improved models built on it have also proven applicable to industrial inspection. The official source code is based on TensorFlow, and its reproduced accuracy falls far short of the accuracy reported in the paper. zylo117's improved PyTorch code is easier to reproduce and easier to learn from.

Source code for the reproduction: Yet-Another-EfficientDet-Pytorch/tutorial/train_logo.ipynb at master · zylo117/Yet-Another-EfficientDet-Pytorch · GitHub: https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch/blob/master/tutorial/train_logo.ipynb

1. Create an environment

As suggested previously, when creating a new project the suitability of the environment is extremely important. Installing new requirements into another project's environment can easily force that project's environment to be rebuilt. To avoid such compatibility problems, it is best for each project to have its own environment.

1. Open Anaconda Prompt

conda create -n efficient python=3.7

2. Activate the environment

conda activate efficient

If successful, the environment has been created.

2. Download the source code

1. Open the GitHub link above and download the source code.

2. After downloading, unzip it and open it as a project in PyCharm.

3. Configure the environment

According to the requirements of the reproduced source code, the packages we need to install are: pycocotools, numpy, opencv-python, tqdm, tensorboard, tensorboardX, pyyaml, webcolors, plus torch 1.4.0 and torchvision 0.5.0.

1. First, select the environment you just created in PyCharm. For how to select an environment in PyCharm, please refer to the author's other article on reproducing YOLOv8.

2. Click Terminal; the brackets at the far left should display efficient (the environment created above). It is recommended to turn off any VPN or other proxy while configuring the environment.

(1)

pip install pycocotools numpy opencv-python tqdm tensorboard tensorboardX pyyaml webcolors

(2) The most common error with the command above is that pycocotools fails to install, because the official pycocotools package does not provide Windows support.

Solution:

pip install pycocotools-windows

 

If the command completes without errors, the installation has succeeded.
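To double-check, you can try importing the package from the same environment; this is only a quick sanity check, not part of the original tutorial:

python -c "from pycocotools.coco import COCO; print('pycocotools is importable')"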

(3) Following the code's workflow, the next step is to install torch and torchvision. If you follow the source code's steps directly on Windows, you may run into download errors.

It is recommended to install them following the official PyTorch website: https://pytorch.org/
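For reference, the historical pip command for this version pair with CUDA 10.1 looked roughly like the line below; treat it only as an example and prefer whatever command pytorch.org generates for your own CUDA version:

pip install torch==1.4.0+cu101 torchvision==0.5.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html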

After the installation is complete, enter the following in PyCharm's Terminal:

python

import torch
torch.cuda.is_available()

If it returns True, the GPU version is working.

While configuring the environment you may also run into another situation: you first install pycocotools, numpy, opencv-python, tqdm, tensorboard, tensorboardX, pyyaml and webcolors, and torch and torchvision can still be downloaded afterwards, but they end up unusable and import torch fails.

Solution: first go to the official PyTorch website and install torch and torchvision, then install the packages above.

At this point the environment configuration is complete. (Just when everything seems to be going well, you may still run into OpenCV version incompatibilities or other errors. If the environment above does not work, you can message the author privately for a ready-made environment package, which can be placed directly under Anaconda's envs folder, although I don't check messages often.)

4. Preliminary verification that the model can run

1. Download the weights

Choose D0 for a test: it misses a melon rind, but the overall result is not bad.

 

Choose D8 for a test: the previously missed object is detected, along with the water glass next to it.

 

2. Partial code explanation

The weights loaded here are selected with the parameter compound_coef, which is configurable: values 0 through 8 choose which EfficientDet-D0 to D8 weight file is downloaded and used.
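As a rough illustration of how compound_coef ties the model to a weight file (a sketch based on the repository's efficientdet_test.py; the class list here is a placeholder, check the actual script for the full logic):

import torch
from backbone import EfficientDetBackbone   # module shipped with the zylo117 repository

compound_coef = 0                            # 0-8 selects EfficientDet-D0 ... D8
obj_list = ['person', 'bicycle']             # placeholder class list for illustration

# build the network at the chosen scale and load the matching weight file
model = EfficientDetBackbone(compound_coef=compound_coef, num_classes=len(obj_list))
model.load_state_dict(torch.load(f'weights/efficientdet-d{compound_coef}.pth'))
model.requires_grad_(False)
model.eval()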

 

So far the model has been verified to be operational.

5. Training with the COCO dataset (or creating your own dataset)

(1) Prepare the dataset. There are many ways to download it: the official website, privately shared Baidu Cloud links that can be found by searching on CSDN, and so on; other datasets work as well.

The format of the coco data set:

# for example, coco2017
datasets/
    -coco2017/
        -train2017/
            -000000000001.jpg
            -000000000002.jpg
            -000000000003.jpg
        -val2017/
            -000000000004.jpg
            -000000000005.jpg
            -000000000006.jpg
        -annotations
            -instances_train2017.json
            -instances_val2017.json

I prepared the coco2014 dataset.
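Before training, a quick way to confirm that the annotation files are readable is to load them with pycocotools (the path below is only an example; point it at the files you actually prepared):

from pycocotools.coco import COCO

# example path, adjust to the dataset you prepared (e.g. coco2014)
coco = COCO('datasets/coco2017/annotations/instances_val2017.json')
print(len(coco.getImgIds()), 'images,', len(coco.getCatIds()), 'categories')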

 

(2) Create a parameter configuration file

Create a coco.yml (named after your own dataset, yourdataset.yml) and place it under the projects folder. The downloaded source code already contains coco, logo and shape examples; a sketch of what such a file contains is shown below.
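A rough sketch of the fields in such a yml file, modelled on the examples shipped with the repository (the class list is truncated here for illustration; fill in the full list for your own dataset):

project_name: coco                      # should match the name passed to train.py with -p
train_set: train2017                    # image folder names under datasets/<project_name>/
val_set: val2017
num_gpus: 1
mean: [0.485, 0.456, 0.406]             # ImageNet statistics used for normalisation
std: [0.229, 0.224, 0.225]
anchors_scales: '[2 ** 0, 2 ** (1.0 / 3.0), 2 ** (2.0 / 3.0)]'
anchors_ratios: '[(1.0, 1.0), (1.4, 0.7), (0.7, 1.4)]'
obj_list: ['person', 'bicycle', 'car']  # full class list of your dataset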

(3) In fact, there is no need to train the COCO dataset from scratch, which would carry a huge time cost (several months). The suggested approach is to train a custom dataset starting from pretrained weights. Part of the backbone network can even be frozen; the concept of transfer learning can be read up on elsewhere.

What needs to be modified for training:

The argument configuration in train.py; alternatively, the values can be written directly into the code.

Modify the dataset root path here to the absolute path of your own dataset.

In efficientdet/dataset.py, modify the folder names to match your dataset; a simplified sketch of the relevant spot follows.
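This is roughly what that part of efficientdet/dataset.py looks like (simplified from the repository; only the folder and annotation names matter here). For a coco2014-style dataset you would pass set='train2014' and set='val2014':

import os
from pycocotools.coco import COCO
from torch.utils.data import Dataset

class CocoDataset(Dataset):
    # simplified sketch: the 'set' argument is the image folder name,
    # and the annotation file name is derived from it
    def __init__(self, root_dir, set='train2017', transform=None):
        self.root_dir = root_dir
        self.set_name = set
        self.transform = transform
        self.coco = COCO(os.path.join(root_dir, 'annotations', 'instances_' + self.set_name + '.json'))
        self.image_ids = self.coco.getImgIds()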

 

a. Command for training from scratch:

python train.py -c 0 --batch_size 64 --optim sgd --lr 8e-2

(The parameters here may not suit your machine, especially the default num_workers=10. If the Windows paging file (virtual memory) is insufficient, you will get an error saying the paging file is too small. It is recommended to choose values according to your hardware; an adjusted example follows.)
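For example, a more conservative starting point on a single modest GPU might look like the line below (the values are illustrative, and --num_workers and --data_path are train.py options as far as I recall; drop or adjust them if your version of the script differs):

python train.py -c 0 -p coco --batch_size 8 --num_workers 0 --optim sgd --lr 1e-2 --data_path /absolute/path/to/datasets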

The training effect is as follows:

The code also supports stopping and then resuming training from the last run:

Stop training and save the current progress:

Ctrl+C

Resume training:

python train.py -c 2 -p your_project_name --batch_size 8 --lr 1e-3 \
 --load_weights last \
 --head_only True

b. Transfer learning

python train.py -c 2 -p your_project_name --batch_size 8 --lr 1e-3 --num_epochs 10 \
 --load_weights /path/to/your/weights/efficientdet-d2.pth

The above is a simple reproduction plus training on the dataset format you want. EfficientDet also supports the VOC format: the yml file needs to be matched accordingly and the dataset loader replaced with a VOC data-loading file. For more details, please explore on your own.

 
