A Hands-On Computer Vision Starter Project -- PyTorch -- Alibaba Tianchi Competition -- Street View Character Recognition -- Step-by-Step Guide

Table of Contents

Preface:

Libraries Used:

1. Data Preparation

2. Data Loading

3. Creating the Dataset Class

PyTorch -- Data Loading with Dataset and DataLoader Explained

4. Data Augmentation and Creating the DataLoader

5. Building the Model:

6. Training the Model

7. Model Prediction

8. Submitting Results

Preface:

The official round of this Alibaba Tianchi competition has already ended, but there is a long-term round you can still join to build up your CV fundamentals.

Tianchi Big Data Competitions - Alibaba Cloud Tianchi

This is the official Tianchi competition site. If you want to compete, click the link above to register. It hosts many competitions in data analysis, computer vision, algorithms, and more, along with plenty of beginner-friendly contests you can learn from.

Without further ado, let's get started.

Note: this guide assumes you already have a PyTorch environment set up; any libraries that come up along the way can be installed as needed.

The basic workflow is: prepare the data, load it, wrap it in a Dataset and DataLoader with augmentation, build the model, train it, run predictions, and submit the results.


Libraries Used:

import os, sys, glob, shutil, json
import time
os.environ["CUDA_VISIBLE_DEVICES"] = '0'
import cv2

from PIL import Image
import numpy as np
import pandas as pd

from tqdm import tqdm, tqdm_notebook
# %pylab inline

import torch
import torch.nn as nn
from torch.utils.data import Dataset
import torchvision.transforms as transforms
import torchvision.models as models

torch.manual_seed(0)
torch.backends.cudnn.deterministic = False
torch.backends.cudnn.benchmark = True

1. Data Preparation

Go to the Tianchi competition page and register to get the data. The CSV file provided there contains the download links for the training set, the validation set, and the test set:

2. Data Loading

Straight to the code:

Loading the training set:

train_path = sorted(glob.glob('D:/1wangyong\pytorchtrains\街景字符\Data\mchar_train/*.png'))
train_json = json.load(open('D:/1wangyong\pytorchtrains\街景字符\Data\mchar_train.json'))

train_label = [train_json[x]['label'] for x in train_json]

Loading the validation set works the same way as the training set:

val_path = sorted(glob.glob('D:/1wangyong\pytorchtrains\街景字符\Data\mchar_val/*.png'))
val_json = json.load(open('D:/1wangyong\pytorchtrains\街景字符\Data\mchar_val.json'))
val_label = [val_json[x]['label'] for x in val_json]
print(len(val_path), len(val_label))

Many articles recommend keeping Chinese characters out of file paths. Since everything ran without problems here, I left mine as they are; if you would rather be safe, all-English paths work just as well.
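For reference, each entry in the annotation JSON maps an image file name to a dict that contains at least a 'label' list, with one class per character in that image (only the 'label' field is used here; the exact set of other keys depends on the dataset release). A quick way to inspect one entry:

first_name = sorted(train_json.keys())[0]
print(first_name, train_json[first_name]['label'])   # e.g. a file name and a digit list like [1, 9]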

3. Creating the Dataset Class

In PyTorch, once the data has been loaded, the next step is to build a Dataset class. For a detailed explanation, see my post:

PyTorch -- Data Loading with Dataset and DataLoader Explained

The Dataset class for this competition looks like this:

class SVHNDataset(Dataset):
    def __init__(self, img_path, img_label, transform=None):
        self.img_path = img_path
        self.img_label = img_label
        self.transform = transform

    def __getitem__(self, index):
        img = Image.open(self.img_path[index]).convert('RGB')

        if self.transform is not None:
            img = self.transform(img)

        # Each image contains at most 5 characters; pad shorter labels with
        # class 10, which stands for "no character", so every sample gets a
        # fixed-length label of 5.
        lbl = np.array(self.img_label[index], dtype=np.int_)
        lbl = list(lbl) + (5 - len(lbl)) * [10]
        return img, torch.from_numpy(np.array(lbl[:5]))

    def __len__(self):
        return len(self.img_path)
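As a quick check, the dataset can be instantiated with the paths and labels from step 2 and indexed like a list (a minimal sketch, using only ToTensor as the transform):

check_dataset = SVHNDataset(train_path, train_label,
                            transforms.Compose([transforms.ToTensor()]))
img, lbl = check_dataset[0]
print(img.shape)   # torch.Size([3, H, W]) for the first training image
print(lbl)         # a length-5 label tensor, padded with 10 for missing characters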


4. Data Augmentation and Creating the DataLoader

Again, the training set and the validation set are handled separately:

Validation set:

val_loader = torch.utils.data.DataLoader(
    SVHNDataset(val_path, val_label,
                transforms.Compose([
                    transforms.Resize((80, 160)),
                    transforms.RandomCrop((64, 128)),
                    # transforms.ColorJitter(0.3, 0.3, 0.2),
                    # transforms.RandomRotation(5),
                    transforms.ToTensor(),
                    transforms.Normalize([0.485, 0.456, 0.406], [
                                         0.229, 0.224, 0.225])
                ])),
    batch_size=64,
    shuffle=False,
    num_workers=0,
)

Training set:

train_loader = torch.utils.data.DataLoader(
    SVHNDataset(train_path, train_label,
                transforms.Compose([
                    transforms.Resize((80, 160)),
                    transforms.RandomCrop((64, 128)),
                    transforms.ColorJitter(0.3, 0.3, 0.2),
                    transforms.RandomRotation(10),
                    transforms.ToTensor(),
                    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
                ])),
    batch_size=64,
    shuffle=True,
    num_workers=0,
)
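Before training, it is worth pulling one batch from the loader to confirm the tensor shapes (a quick sanity check, not part of the original pipeline):

images, labels = next(iter(train_loader))
print(images.shape)   # expected: torch.Size([64, 3, 64, 128]) after the 64x128 random crop
print(labels.shape)   # expected: torch.Size([64, 5]), five character slots per image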

5. Building the Model:

The official baseline model:

class SVHN_Model1(nn.Module):
    def __init__(self):
        super(SVHN_Model1, self).__init__()

        model_conv = models.resnet18(pretrained=True)
        model_conv.avgpool = nn.AdaptiveAvgPool2d(1)
        model_conv = nn.Sequential(*list(model_conv.children())[:-1])  # drop the final fc layer
        self.cnn = model_conv

        # one classification head per character position,
        # 11 classes each: digits 0-9 plus class 10 for "no character"
        self.fc1 = nn.Linear(512, 11)
        self.fc2 = nn.Linear(512, 11)
        self.fc3 = nn.Linear(512, 11)
        self.fc4 = nn.Linear(512, 11)
        self.fc5 = nn.Linear(512, 11)

    def forward(self, img):
        feat = self.cnn(img)
        # print(feat.shape)
        feat = feat.view(feat.shape[0], -1)
        c1 = self.fc1(feat)
        c2 = self.fc2(feat)
        c3 = self.fc3(feat)
        c4 = self.fc4(feat)
        c5 = self.fc5(feat)
        return c1, c2, c3, c4, c5

The official baseline is fairly basic, and there is little to gain from using it unchanged, so we make a series of improvements to the backbone network:

1. Replace resnet18 with the larger resnet152

2. Add a fully connected hidden layer to each classification head

3. Add dropout to the hidden layers

4. Apply a ReLU activation after each hidden layer to strengthen the non-linearity

Switching from resnet18 to resnet152 gives a deeper model with greater representational capacity, and the extra hidden layer likewise increases the model's fitting ability; the dropout on the hidden layers balances this out and helps prevent overfitting to some degree. (These are just a few improvement tricks, not necessarily optimal.)
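One detail the backbone swap brings with it: resnet18 produces a 512-dimensional feature vector, but resnet152 is built from Bottleneck blocks and produces 2048-dimensional features, so the hidden fully connected layers in the improved model below take 2048 inputs. A quick way to confirm the feature size (a small sketch):

backbone = nn.Sequential(*list(models.resnet152(pretrained=True).children())[:-1])
with torch.no_grad():
    feat = backbone(torch.zeros(1, 3, 64, 128))   # one dummy image at the training crop size
print(feat.view(1, -1).shape)                      # torch.Size([1, 2048])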

The improved model definition is as follows:

class SVHN_Model2(nn.Module):
    def __init__(self):
        super(SVHN_Model2, self).__init__()

        # resnet152 backbone
        model_conv = models.resnet152(pretrained=True)
        model_conv.avgpool = nn.AdaptiveAvgPool2d(1)
        model_conv = nn.Sequential(*list(model_conv.children())[:-1])  # drop the final fc layer
        self.cnn = model_conv

        # resnet152 outputs 2048-dimensional features (not 512 as in resnet18)
        self.hd_fc1 = nn.Linear(2048, 256)
        self.hd_fc2 = nn.Linear(2048, 256)
        self.hd_fc3 = nn.Linear(2048, 256)
        self.hd_fc4 = nn.Linear(2048, 256)
        self.hd_fc5 = nn.Linear(2048, 256)
        self.dropout_1 = nn.Dropout(0.25)
        self.dropout_2 = nn.Dropout(0.25)
        self.dropout_3 = nn.Dropout(0.25)
        self.dropout_4 = nn.Dropout(0.25)
        self.dropout_5 = nn.Dropout(0.25)
        self.fc1 = nn.Linear(256, 11)
        self.fc2 = nn.Linear(256, 11)
        self.fc3 = nn.Linear(256, 11)
        self.fc4 = nn.Linear(256, 11)
        self.fc5 = nn.Linear(256, 11)

    def forward(self, img):
        feat = self.cnn(img)
        feat = feat.view(feat.shape[0], -1)

        feat1 = torch.relu(self.hd_fc1(feat))
        feat2 = torch.relu(self.hd_fc2(feat))
        feat3 = torch.relu(self.hd_fc3(feat))
        feat4 = torch.relu(self.hd_fc4(feat))
        feat5 = torch.relu(self.hd_fc5(feat))
        feat1 = self.dropout_1(feat1)
        feat2 = self.dropout_2(feat2)
        feat3 = self.dropout_3(feat3)
        feat4 = self.dropout_4(feat4)
        feat5 = self.dropout_5(feat5)

        c1 = self.fc1(feat1)
        c2 = self.fc2(feat2)
        c3 = self.fc3(feat3)
        c4 = self.fc4(feat4)
        c5 = self.fc5(feat5)

        return c1, c2, c3, c4, c5

6. Training the Model

With data loading, data augmentation, and model construction all in place, we can start training properly.

Some of you may be wondering what the earlier Dataset class and DataLoader steps were actually for.

Again, have a look at my earlier post: PyTorch -- Data Loading with Dataset and DataLoader Explained

which covers this in detail.

Training code (the train, validate, and predict helpers it calls are sketched at the end of this section):

model = SVHN_Model2()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), 0.001)
best_loss = 1000.0

use_cuda = True
if use_cuda:
    model = model.cuda()

for epoch in range(100):
    start = time.time()
    print('start', time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(start)))
    train_loss = train(train_loader, model, criterion, optimizer, epoch)
    val_loss = validate(val_loader, model, criterion)
    val_label = [''.join(map(str, x)) for x in val_loader.dataset.img_label]
    val_predict_label = predict(val_loader, model, 1)
    val_predict_label = np.vstack([
        val_predict_label[:, :11].argmax(1),
        val_predict_label[:, 11:22].argmax(1),
        val_predict_label[:, 22:33].argmax(1),
        val_predict_label[:, 33:44].argmax(1),
        val_predict_label[:, 44:55].argmax(1),
    ]).T
    val_label_pred = []
    for x in val_predict_label:
        val_label_pred.append(''.join(map(str, x[x != 10])))

    val_char_acc = np.mean(np.array(val_label_pred) == np.array(val_label))
    end = time.time()
    print('end', time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(end)))
    time_cost = end - start
    print(
        'Epoch: {0}, Train loss: {1} \t Val loss: {2}, time_cost: {3}'.format(
            epoch,
            train_loss,
            val_loss,
            time_cost))
    print('Val Acc', val_char_acc)
    # keep the model with the lowest validation loss so far
    if val_loss < best_loss:
        best_loss = val_loss
        # print('Find better model in Epoch {0}, saving model.'.format(epoch))
        torch.save(model.state_dict(), './model.pt')

I added start and end timestamps to the training code to see how long each epoch takes; if you don't need them, just comment those lines out.

With Adam as the optimizer, training finishes within roughly 20 epochs; with SGD it takes somewhat longer.

The code above uses Adam.
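The train, validate, and predict helpers called in the loop above are not shown in the original post. Below is a minimal sketch of what they might look like, assuming the loss is the sum of the five heads' cross-entropy losses and that predict concatenates the five head outputs into an (N, 55) array, which is what the [:, :11], [:, 11:22], ... slicing above expects; the tta argument simply repeats inference and accumulates the outputs:

def train(train_loader, model, criterion, optimizer, epoch):
    model.train()
    train_loss = []
    for data, target in train_loader:
        data, target = data.cuda(), target.cuda()
        c1, c2, c3, c4, c5 = model(data)
        # sum of cross-entropy losses, one per character position
        loss = criterion(c1, target[:, 0]) + criterion(c2, target[:, 1]) + \
               criterion(c3, target[:, 2]) + criterion(c4, target[:, 3]) + \
               criterion(c5, target[:, 4])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        train_loss.append(loss.item())
    return np.mean(train_loss)


def validate(val_loader, model, criterion):
    model.eval()
    val_loss = []
    with torch.no_grad():
        for data, target in val_loader:
            data, target = data.cuda(), target.cuda()
            c1, c2, c3, c4, c5 = model(data)
            loss = criterion(c1, target[:, 0]) + criterion(c2, target[:, 1]) + \
                   criterion(c3, target[:, 2]) + criterion(c4, target[:, 3]) + \
                   criterion(c5, target[:, 4])
            val_loss.append(loss.item())
    return np.mean(val_loss)


def predict(test_loader, model, tta=1):
    model.eval()
    test_pred_tta = None
    # repeat inference tta times and accumulate the outputs (test-time augmentation)
    for _ in range(tta):
        test_pred = []
        with torch.no_grad():
            for data, _ in test_loader:
                data = data.cuda()
                c1, c2, c3, c4, c5 = model(data)
                output = torch.cat([c1, c2, c3, c4, c5], dim=1)  # shape (batch, 55)
                test_pred.append(output.cpu().numpy())
        test_pred = np.vstack(test_pred)
        test_pred_tta = test_pred if test_pred_tta is None else test_pred_tta + test_pred
    return test_pred_tta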

7. Model Prediction

At this point we have finished training and saved the best model, so all that is left is to load it and run it on the test set.

The code is as follows:

model = SVHN_Model2().cuda()   # use the same model class that was trained and saved above
test_path = sorted(glob.glob('D:/1wangyong\pytorchtrains\街景字符\Data\mchar_test_a/*.png'))
# test_json = json.load(open('../input/test.json'))
test_label = [[1]] * len(test_path)  # dummy labels: the test set has no annotations, but SVHNDataset expects a label list
# print(len(test_path), len(test_label))

test_loader = torch.utils.data.DataLoader(
    SVHNDataset(test_path, test_label,
                transforms.Compose([
                    transforms.Resize((68, 136)),
                    transforms.RandomCrop((64, 128)),
                    # transforms.ColorJitter(0.3, 0.3, 0.2),
                    # transforms.RandomRotation(5),
                    transforms.ToTensor(),
                    transforms.Normalize([0.485, 0.456, 0.406], [
                                         0.229, 0.224, 0.225])
                ])),
    batch_size=40,
    shuffle=False,
    num_workers=0,
)

# load the best saved model
model.load_state_dict(torch.load('D:/Projects/wordec/model.pt'))

test_predict_label = predict(test_loader, model, 1)
print(test_predict_label.shape)
print('test_predict_label', test_predict_label)

test_label = [''.join(map(str, x)) for x in test_loader.dataset.img_label]
# print('test_label', test_label)
test_predict_label = np.vstack([
    test_predict_label[:, :11].argmax(1),
    test_predict_label[:, 11:22].argmax(1),
    test_predict_label[:, 22:33].argmax(1),
    test_predict_label[:, 33:44].argmax(1),
    test_predict_label[:, 44:55].argmax(1),
]).T

test_label_pred = []
for x in test_predict_label:
    test_label_pred.append(''.join(map(str, x[x != 10])))
# print("test_label_pred", len(test_label_pred))
df_submit = pd.read_csv('D:/Projects/wordec/input/test_A_sample_submit.csv')
df_submit['file_code'] = test_label_pred
df_submit.to_csv('submit_1018.csv', index=None)
print("finished")

Once you have gone through the steps above, you have completed a full basic training run. Exciting, isn't it?

8. Submitting Results

Go back to the Tianchi competition page, find the competition, and submit your results!

Note: the result file is the CSV saved in step 7.

Go check your ranking!


Reposted from blog.csdn.net/weixin_53374931/article/details/130100125