Hands-On: Building a Deep-Learning Garbage Classifier

With the release of PaddlePaddle 2.0, the PaddleClas image classification toolkit has also been updated to version 2.0-rc1. The new version of PaddleClas trains models with dynamic graphs by default. In this article we use the PaddleClas toolkit to build a simple garbage classifier from scratch and see how convenient the new version is: even beginners can quickly train a high-accuracy model. The article has two parts: this first part walks through training from scratch, and the second part will explain some of the core code and the techniques used during deep learning training.

1. Prepare the Dataset

Dataset download link:
https://aistudio.baidu.com/aistudio/datasetdetail/64185
After downloading the dataset, first unzip the archive.

mkdir dataset
cd dataset
unzip garbage_classify.zip

The dataset contains 43 classes. For example, 9 stands for "kitchen waste / fruit pulp", 22 for "recyclables / old clothes", and 39 for "hazardous waste / expired medicine". The full class list is in the garbage_classify_rule.json file inside the garbage_classify directory.
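
If you want to look up the mapping programmatically, a minimal sketch like the following should work (it assumes the rule file sits directly under ./garbage_classify/ and maps string class ids to class names):

import json

# Load the class-id -> class-name mapping (path assumed from the unzipped dataset).
with open('./garbage_classify/garbage_classify_rule.json', 'r', encoding='utf-8') as f:
    rules = json.load(f)

# Print the three classes mentioned above.
for class_id in ['9', '22', '39']:
    print(class_id, '->', rules[class_id])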

Once we have the dataset, we need to split it. Create a process_dataset.py file in the dataset directory and use the following code to split the data into training, validation, and test sets with an 8:1:1 ratio.

import os
import glob
import numpy as np

# Collect all label files and shuffle them so the split is random.
# (Set np.random.seed(...) first if you want a reproducible split.)
file_list = glob.glob('./garbage_classify/train_data/*.txt')
np.random.shuffle(file_list)

# 8:1:1 split for train / validation / test.
train_len = len(file_list) // 10 * 8
val_len = len(file_list) // 10


def write_list(txt_files, output_file):
    """Parse each label file ("image_name, label") and write "image_path label" lines."""
    lines = []
    for txt_file in txt_files:
        with open(txt_file, 'r') as f:
            line = f.readlines()[0].strip()
        image_file, label = line.split(',')
        image_file = image_file.strip()
        label = label.strip()
        image_path = os.path.join('./garbage_classify/train_data/', image_file)
        lines.append(image_path + ' ' + label + '\n')
    with open(output_file, 'w') as f:
        f.writelines(lines)


write_list(file_list[:train_len], 'train_list.txt')
write_list(file_list[train_len:train_len + val_len], 'val_list.txt')
write_list(file_list[train_len + val_len:], 'test_list.txt')
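
As an optional sanity check, you can confirm the split sizes and the "image_path label" line format before moving on:

# Optional check: print how many samples went into each split and peek at the first line.
for name in ['train_list.txt', 'val_list.txt', 'test_list.txt']:
    with open(name, 'r') as f:
        lines = f.readlines()
    print(name, len(lines), '->', lines[0].strip())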

After the code above finishes running, the directory structure looks like this:

├── garbage_classify
├── process_dataset.py
├── test_list.txt
├── train_list.txt
└── val_list.txt

2. Download the PaddleClas Toolkit

Download the PaddleClas source code and switch to the 2.0-rc1 release. To install the toolkit's dependencies, refer to the following document:
https://github.com/PaddlePaddle/PaddleClas/blob/release/2.0-rc1/docs/en/tutorials/install_en.md

git clone https://github.com/PaddlePaddle/PaddleClas.git
cd PaddleClas
git fetch
git checkout -b release/2.0-rc1 origin/release/2.0-rc1

3. Modify the Configuration File

PaddleClas ships with many neural network models together with their training configurations, stored under the configs directory. For this garbage classifier I chose ResNet50_vd, a variant of the ResNet50 network that is widely used in industry. First, create a configuration file for the garbage classifier by copying an existing one.

cd PaddleClas/configs/ResNet/
cp ResNet50_vd.yaml garbage_ResNet50_vd.yaml

Then edit garbage_ResNet50_vd.yaml as follows:

mode: 'train'
ARCHITECTURE:
    name: 'ResNet50_vd'

pretrained_model: ""
model_save_dir: "./output/"
classes_num: 43
total_images: 1281167  # set this to the actual number of training images in your split
save_interval: 1
validate: True
valid_interval: 1
epochs: 200
topk: 5
image_shape: [3, 224, 224]

use_mix: False
ls_epsilon: 0.1

LEARNING_RATE:
    function: 'Cosine'          
    params:                   
        lr: 0.001               

OPTIMIZER:
    function: 'Momentum'
    params:
        momentum: 0.9
    regularizer:
        function: 'L2'
        factor: 0.000070

TRAIN:
    batch_size: 256
    num_workers: 0
    # change this to the real path of your dataset directory; an absolute path is recommended
    file_list: "../dataset/train_list.txt" 
    data_dir: "../dataset/"
    shuffle_seed: 0
    transforms:
        - DecodeImage:
            to_rgb: True
            to_np: False
            channel_first: False
        - RandCropImage:
            size: 224
        - RandFlipImage:
            flip_code: 1
        - NormalizeImage:
            scale: 1./255.
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''
        - ToCHWImage:
    mix:                       
        - MixupOperator:    
            alpha: 0.2      

VALID:
    batch_size: 64
    num_workers: 0
    # change this to the real path of your dataset directory; an absolute path is recommended
    file_list: "../dataset/val_list.txt"
    data_dir: "../dataset/"
    shuffle_seed: 0
    transforms:
        - DecodeImage:
            to_rgb: True
            to_np: False
            channel_first: False
        - ResizeImage:
            resize_short: 256
        - CropImage:
            size: 224
        - NormalizeImage:
            scale: 1.0/255.0
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''
        - ToCHWImage:
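
For intuition, here is a minimal NumPy/PIL sketch of what the VALID transforms above amount to: resize the short side to 256, center-crop to 224, scale pixel values to [0, 1], normalize with the ImageNet mean/std, and convert HWC to CHW. It is only meant to illustrate the pipeline, not to replace the PaddleClas operators.

import numpy as np
from PIL import Image

def val_preprocess(path, resize_short=256, crop=224):
    img = Image.open(path).convert('RGB')
    # ResizeImage: scale so that the shorter side equals resize_short.
    w, h = img.size
    scale = resize_short / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)))
    # CropImage: center crop of size crop x crop.
    w, h = img.size
    left, top = (w - crop) // 2, (h - crop) // 2
    img = img.crop((left, top, left + crop, top + crop))
    # NormalizeImage: scale to [0, 1], then subtract mean and divide by std.
    arr = np.asarray(img).astype('float32') / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype='float32')
    std = np.array([0.229, 0.224, 0.225], dtype='float32')
    arr = (arr - mean) / std
    # ToCHWImage: HWC -> CHW, the layout the network expects.
    return arr.transpose((2, 0, 1))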

4. Start Training

To speed up convergence and improve accuracy, I load a pretrained model first and then fine-tune it. Start by downloading the pretrained weights.

wget https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet50_vd_pretrained.pdparams

Then start training the model (from the PaddleClas root directory):

python tools/train.py \
    -c configs/ResNet/garbage_ResNet50_vd.yaml \
    -o pretrained_model="ResNet50_vd_pretrained" \
    -o use_gpu=True

The log output during training is shown below. Note the two UserWarnings: the pretrained weights of the final fully connected layer have ImageNet's 1000-class shape, so they are skipped and re-initialized, since our classifier outputs 43 classes.

W1214 20:29:28.872682  1473 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 70, Driver API Version: 10.1, Runtime API Version: 10.1
W1214 20:29:28.877846  1473 device_context.cc:346] device: 0, cuDNN Version: 7.6.
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1175: UserWarning: Skip loading for out.weight. out.weight receives a shape [2048, 1000], but the expected shape is [2048, 43].
  warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1175: UserWarning: Skip loading for out.bias. out.bias receives a shape [1000], but the expected shape is [43].
  warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
2020-12-14 20:29:33 INFO: Finish initing model from ResNet50_vd_pretrained
2020-12-14 20:29:36 INFO: epoch:0  , train step:0   , loss: 3.78009, top1: 0.00781, top5: 0.11328, lr: 0.001000, batch_cost: 2.94361 s, reader_cost: 2.13878 s, ips: 86.96806 images/sec.
2020-12-14 20:30:01 INFO: epoch:0  , train step:10  , loss: 3.70998, top1: 0.06641, top5: 0.26953, lr: 0.001000, batch_cost: 2.42268 s, reader_cost: 1.62624 s, ips: 105.66822 images/sec.
2020-12-14 20:30:25 INFO: epoch:0  , train step:20  , loss: 3.62013, top1: 0.10938, top5: 0.35938, lr: 0.001000, batch_cost: 2.43433 s, reader_cost: 1.63609 s, ips: 105.16244 images/sec.
2020-12-14 20:30:50 INFO: epoch:0  , train step:30  , loss: 3.53434, top1: 0.21484, top5: 0.41406, lr: 0.001000, batch_cost: 2.46094 s, reader_cost: 1.66256 s, ips: 104.02520 images/sec.

5. Model Evaluation

To see results quickly, I stopped training after 100 epochs. At that point the best model reached top1: 0.90589, top5: 0.98966 on the validation set.

Next, let's evaluate the best model on the test set.

In PaddleClas/configs/ResNet/garbage_ResNet50_vd.yaml, point the validation set paths at the test set instead.

VALID:
    batch_size: 64
    num_workers: 0
    file_list: "../dataset/test_list.txt"
    data_dir: "../dataset/"

Now run the evaluation:

python tools/eval.py \
    -c ./configs/ResNet/garbage_ResNet50_vd.yaml \
    -o pretrained_model="./output/ResNet50_vd/best_model/ppcls"

The output is as follows:

2020-12-15 09:08:25 INFO: epoch:0  , valid step:0   , loss: 1.05716, top1: 0.89062, top5: 1.00000, lr: 0.000000, batch_cost: 0.75766 s, reader_cost: 0.68446 s, ips: 84.47009 images/sec.
2020-12-15 09:08:31 INFO: epoch:0  , valid step:10  , loss: 0.89015, top1: 0.92188, top5: 1.00000, lr: 0.000000, batch_cost: 0.58153 s, reader_cost: 0.51459 s, ips: 110.05544 images/sec.
2020-12-15 09:08:36 INFO: epoch:0  , valid step:20  , loss: 0.91526, top1: 0.90625, top5: 1.00000, lr: 0.000000, batch_cost: 0.58075 s, reader_cost: 0.51361 s, ips: 110.20320 images/sec.
2020-12-15 09:08:42 INFO: epoch:0  , valid step:30  , loss: 0.83382, top1: 0.92857, top5: 1.00000, lr: 0.000000, batch_cost: 0.55392 s, reader_cost: 0.48895 s, ips: 25.27445 images/sec.
2020-12-15 09:08:42 INFO: END epoch:0   valid loss: 0.96556, top1: 0.90331, top5: 0.99018,  batch_cost: 0.55392 s, reader_cost: 0.48895 s, batch_cost_sum: 11.63230 s, ips: 25.27445 images/sec.

As you can see, the best model reaches top1: 0.90331, top5: 0.99018 on the test set, i.e. about 90% accuracy. There is still room for improvement: the accuracy can be pushed further with hyperparameter tuning, a different model, or data augmentation.

The next article will walk through the core code of the PaddleClas toolkit as well as some tuning strategies.

PaddleClas repository: https://github.com/PaddlePaddle/PaddleClas

Reposted from blog.csdn.net/txyugood/article/details/111184297