YOLOv4/YOLOv4-tiny training: a step-by-step beginner tutorial

Table of contents

1. PyTorch environment setup
    1. Create a new environment
    2. Activate the environment
    3. Install PyTorch for your version
2. Installing labelimg
3. Data processing
    1. Renaming the data files
    2. Data augmentation
4. YOLOv4 training process
5. Renting a GPU

1. PyTorch environment setup

This assumes Anaconda is already installed. Run the following in the PyCharm terminal.

1. Create a new environment

conda create -n pytorch1.6_cuda10.2 python=3.7

// Creates a Python 3.7 environment for PyTorch 1.6

2. Activate the environment

conda activate pytorch1.6_cuda10.2   

3. Install PyTorch for your version

conda install pytorch==1.6.0 torchvision==0.7.0 cpuonly -c pytorch

// Installs only the CPU build of PyTorch 1.6

(I'm not sure whether the GPU build can be installed here; try the CPU build first to see how training goes.)

The remaining dependencies go in a requirements.txt file. Create one and install from it:

pip install -r requirements.txt -i https://pypi.doubanio.com/simple/

If some modules or functions still fail to import afterwards, install the missing packages individually, either through PyCharm's package settings or with pip:

pip install scipy -i https://pypi.doubanio.com/simple/

Replace the package name (scipy here) with whichever module is missing, such as pyyaml.

2. Installing labelimg

1. Press Win+R and enter cmd to open a terminal, then activate the environment:

conda activate pytorch1.6_cuda10.2

2. Install it with pip:

pip install labelimg

3. Data processing

1. Renaming the data files

Make sure every image file is in .jpg or .png format; files with other extensions cannot be loaded by the Python scripts.

In the rename script, set the correct folder path, and make sure the '.jpg' after the if matches your files' extension. For JPG files, change the starting counter (e.g. j = 2) so that numbering continues after the previous files.
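As a sketch of what such a rename script does (the folder path, zero-padded naming scheme, and starting index here are assumptions, not the original script):

```python
import os

def rename_images(image_dir, start=1, ext=".jpg"):
    """Rename every image with the given extension to a zero-padded
    sequential name. Set `start` (the counter j) to continue numbering
    after an earlier batch of files. Returns the next free index."""
    j = start
    for name in sorted(os.listdir(image_dir)):
        if name.lower().endswith(ext):
            new_name = f"{j:05d}{ext}"
            os.rename(os.path.join(image_dir, name),
                      os.path.join(image_dir, new_name))
            j += 1
    return j

# Usage (path is illustrative):
# rename_images("VOCdevkit/VOC2007/JPEGImages")
```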

① When labeling, you may leave images that contain no objects unlabeled, but you must still open every image in labelimg; otherwise the images and annotations will no longer correspond when voc_label.py runs.

② Data augmentation can be done simply with OpenCV, for example by rotating images.

Place the folders into images and annotations, then follow the YOLO steps again.

2. Data enhancement

● Embed the data enhancement module into the model
● Perform data enhancement in the Dataset dataset

Alternatively, you can write your own augmentation functions, for example to adjust image saturation:

# `images` is a batch of float images and `aug_img` a random-augmentation
# function, both assumed to be defined earlier.
import tensorflow as tf
import matplotlib.pyplot as plt

image = tf.expand_dims(images[3] * 255, 0)
saturated = tf.image.adjust_saturation(image, 3)
plt.figure(figsize=(8, 8))
for i in range(9):
    augmented_image = aug_img(saturated)
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(augmented_image[0].numpy().astype("uint8"))
    plt.axis("off")

Cropping works the same way:

image = tf.expand_dims(images[3] * 255, 0)
cropped = tf.image.central_crop(image, central_fraction=0.5)

plt.figure(figsize=(8, 8))
for i in range(9):
    augmented_image = aug_img(cropped)
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(augmented_image[0].numpy().astype("uint8"))
    plt.axis("off")

4. YOLOv4 training process

1. Edit the data files under data/

Change the class names and class count in the files. For example, with 6 classes:

Apple

Mango

Banana

nongfushanquan

toothbrush

wanglaoji
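The training script's --data default points at a yaml file (see the train.py excerpt in step 6); in yolov5-style repos such a file typically looks like the following sketch (the filename and list paths are hypothetical; check your repo's existing data yaml for the exact keys):

```
# data/fruit.yaml  (hypothetical filename)
train: data/train.txt   # list of training images
val: data/val.txt       # list of validation images
nc: 6                   # number of classes
names: ['Apple', 'Mango', 'Banana', 'nongfushanquan', 'toothbrush', 'wanglaoji']
```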

 

2. Run the kmeans file

If it reports an error, the label sizes are too uniform for the k-means clustering to converge; lower the k value to 2 or 3 (normally it is 6).

After it runs, open the kmeans.txt file and copy the resulting anchors.
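For reference, the clustering that the kmeans script performs can be sketched in plain Python: it groups the labels' (width, height) pairs into k anchors using 1 − IoU as the distance. This is a simplified stand-in, not the repo's implementation:

```python
def iou_wh(a, b):
    """IoU of two (w, h) boxes aligned at the same corner."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    return inter / (a[0] * a[1] + b[0] * b[1] - inter)

def kmeans_anchors(boxes, k, iterations=100):
    """Cluster (w, h) label sizes into k anchors; distance = 1 - IoU."""
    # deterministic init: seeds spread over the boxes sorted by area
    srt = sorted(set(boxes), key=lambda b: b[0] * b[1])
    step = max(1, (len(srt) - 1) // max(1, k - 1))
    anchors = [srt[min(i * step, len(srt) - 1)] for i in range(k)]
    for _ in range(iterations):
        # assign each box to the anchor with the highest IoU
        clusters = [[] for _ in range(k)]
        for b in boxes:
            best = max(range(k), key=lambda i: iou_wh(b, anchors[i]))
            clusters[best].append(b)
        # move each anchor to the mean w/h of its cluster
        new = [(sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
               if c else anchors[i] for i, c in enumerate(clusters)]
        if new == anchors:
            break
        anchors = new
    return sorted(anchors)
```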

 

3. Open the yolov4-tiny file in the cfg folder

Use the shortcut Ctrl+F to search for yolo. In each [yolo] block, paste the anchors from kmeans.txt and set the class count, for example:

anchors = 84,112, 273,273, 84,112, 273,273, 84,112, 273,273

classes=6

In the convolutional layer just before each [yolo] block, set filters to (number of classes + 5) * 3, where 3 is the number of prior (anchor) boxes per scale:

filters=33

There are two [yolo] blocks that need these changes.
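The filters arithmetic can be double-checked with a trivial helper (not part of the repo):

```python
def yolo_filters(num_classes, anchors_per_scale=3):
    """filters for the conv layer before a [yolo] block:
    (classes + 5) * anchors, where 5 = x, y, w, h, objectness."""
    return (num_classes + 5) * anchors_per_scale

print(yolo_filters(6))  # 6 classes -> (6 + 5) * 3 = 33
```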

4. Run makeTxt.py

This splits the dataset into training and validation sets.
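What such a split script does can be sketched as follows (the directory, extensions, and 9:1 split ratio are assumptions; check the actual makeTxt.py):

```python
import os
import random

def split_dataset(image_dir, train_ratio=0.9, seed=1):
    """Shuffle the image IDs (filenames without extension) and split
    them into a training list and a validation list."""
    ids = [os.path.splitext(f)[0] for f in sorted(os.listdir(image_dir))
           if f.lower().endswith((".jpg", ".png"))]
    random.seed(seed)
    random.shuffle(ids)
    cut = int(len(ids) * train_ratio)
    return ids[:cut], ids[cut:]

# Usage (path is illustrative):
# train_ids, val_ids = split_dataset("VOCdevkit/VOC2007/JPEGImages")
# then write each list to ImageSets/Main/train.txt and val.txt
```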

5. voc_label.py

In it, change classes to match your own data, and correct the file path in the list_file.write call.

Then run the voc_label.py file.
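Under the hood, voc_label.py converts each VOC XML annotation into YOLO's normalized `class cx cy w h` format; a simplified sketch of that conversion (using the example class list from step 1):

```python
import xml.etree.ElementTree as ET

classes = ["Apple", "Mango", "Banana", "nongfushanquan", "toothbrush", "wanglaoji"]

def voc_to_yolo(xml_text):
    """Convert one VOC XML annotation into YOLO label lines:
    class_index cx cy w h, all coordinates normalized to [0, 1]."""
    root = ET.fromstring(xml_text)
    size = root.find("size")
    img_w = float(size.find("width").text)
    img_h = float(size.find("height").text)
    lines = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        if name not in classes:
            continue  # skip labels that are not in the class list
        box = obj.find("bndbox")
        xmin = float(box.find("xmin").text)
        ymin = float(box.find("ymin").text)
        xmax = float(box.find("xmax").text)
        ymax = float(box.find("ymax").text)
        cx = (xmin + xmax) / 2 / img_w
        cy = (ymin + ymax) / 2 / img_h
        w = (xmax - xmin) / img_w
        h = (ymax - ymin) / img_h
        lines.append(f"{classes.index(name)} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}")
    return lines
```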

6. Start the training

Edit the default arguments in train.py to match your dataset and training requirements.

Note for paths: use forward slashes ('/'), or put r in front of the path string (a raw string, r'...') so that backslashes are not treated as escape characters.

In train.py:

parser.add_argument('--cfg', type=str, default='cfg/csdarknet53s-BIFPN-spp-CA-GTSDB.cfg', help='model.yaml path')

parser.add_argument('--data', type=str, default='data/GTSDB.yaml', help='data.yaml path')

5. Renting a GPU

If you don't have the hardware to train locally, you can rent an online GPU. The steps are as follows:

1. Search for gpushare in Baidu
2. Create a PyTorch environment
3. Click File > Site Manager and log in
4. Transfer the compressed project file to the hy-tmp folder

Upload the requirements file first; it saves time later.

5. Open a JupyterLab terminal and create the environment:

conda create -n py python=3.7

conda activate py

conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=10.2 -c pytorch

cd /hy-tmp    # ls to check the folder contents

pip install -r requirements.txt    # installs the required packages

Remember to add pyyaml and scipy to requirements.txt.

Unzip the archive: unzip 1.zip

Enter the project folder: cd tea    # ls

Run the code: python train.py


Origin blog.csdn.net/m0_58585940/article/details/128545556