An Image Classification Competition Baseline — Using the Smart-Hardware Voice Control Time-Frequency Image Classification Challenge as an Example

AI Studio link: https://aistudio.baidu.com/aistudio/projectdetail/4415367

1. Competition Overview

Competition link: https://challenge.xfyun.cn/topic/info?type=time-frequency-2022&option=tjjg

This is a typical image classification competition: a model is trained on images with known labels and then used to predict the classes of unseen images.

The official dataset is organized as follows:


  • test: the test images to be predicted;
  • train: the training images;
  • train.csv: the paths of the training images and their corresponding labels;
  • 提交示例 (submission template): the paths of the test images and the predicted labels to be submitted.

Below, based on the Baidu PaddlePaddle deep learning framework, I provide three end-to-end baselines covering model training, prediction, and submission-file generation. The same baselines work for any image classification competition; you only need to adapt how the Dataset reads the images and labels, as sketched below.
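For example, if another competition ships its training images in one folder per class instead of providing a train.csv, the DataFrame used throughout this baseline can be rebuilt in a few lines. This is only a minimal sketch; the folder layout and the .png extension are assumptions, not part of this competition's data.

import glob
import pandas as pd

# Hypothetical layout: data/train/<class_name>/<image>.png, one subfolder per class
paths = glob.glob('data/train/*/*.png')
names = [p.split('/')[-2] for p in paths]                 # class name taken from the folder
label2id = {name: i for i, name in enumerate(sorted(set(names)))}

train_df = pd.DataFrame({
    'path': paths,
    'label': [label2id[name] for name in names],          # integer ids, as the Dataset below expects
})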

2. Baseline

Import the required libraries:

import os
import cv2
import time
import numpy as np
import pandas as pd
from PIL import Image
import paddle
from paddle.io import Dataset, DataLoader
from paddle.regularizer import L2Decay
import paddle.nn as nn
from paddle.vision import transforms
from paddle.vision import models
from paddle.metric import Accuracy

Environment setup:

device = paddle.device.get_device()
# prints 'gpu:0' if a GPU is available, otherwise 'cpu'; training on GPU is strongly recommended
paddle.device.set_device(device)
print(device)

Basic data loading:

train_df = pd.read_csv('data/train.csv')
train_df['path'] = 'data/train/' + train_df['image']
train_df = train_df.sample(frac=1).reset_index(drop=True)  # shuffle the training set

test_df = pd.read_csv('data/提交示例.csv')
test_df['path'] = 'data/test/' + test_df['image'] 
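Before going further, a quick sanity check that the paths resolve and the classes are roughly balanced can save debugging time later (optional; the column names follow train.csv as read above):

print(train_df.head())
print(train_df['label'].value_counts())           # class distribution of the training set
print(os.path.exists(train_df['path'].iloc[0]))   # should print True if the paths are correct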

Define the dataset class:

class XunFeiDataset(Dataset):
    def __init__(self, img_path, label, transforms=None):
        self.img_path = img_path
        self.label = label
        self.transforms = transforms

    def __getitem__(self, index):
        img = Image.open(self.img_path[index]).convert('RGB')
        if self.transforms is not None:
            img = self.transforms(img)
        return img, self.label[index]

    def __len__(self):
        return len(self.img_path)
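A quick check that the dataset returns what the model expects (optional; the transform here simply mirrors the validation pipeline used later):

sample_ds = XunFeiDataset(
    train_df['path'].values, train_df['label'].values,
    transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))]))
img, label = sample_ds[0]
print(img.shape, label)   # expected: a [3, 224, 224] tensor and an integer label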

2.1 Training with the basic API

2.1.1 Data splitting, preprocessing, and DataLoader construction

This part consists of three steps:

  • Split the training data: 1943 images for training and 200 for validation;
  • Apply preprocessing to both sets and feature engineering (augmentation) to the training set;
  • Build DataLoaders for the training and validation sets for batched reading during training.
# Batched data loading with paddle.io.DataLoader
train_loader = DataLoader(    
    XunFeiDataset(train_df['path'].values[:-200], train_df['label'].values[:-200],           
            transforms.Compose([
                transforms.Resize(256), 
                transforms.RandomCrop(224),
                transforms.ToTensor(),
                transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
            ])
        ), batch_size=4, shuffle=True, num_workers=0)
    
val_loader = DataLoader(    
    XunFeiDataset(train_df['path'].values[-200:], train_df['label'].values[-200:],            
            transforms.Compose([
                transforms.Resize(224),
                transforms.ToTensor(),
                transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))])
        ), batch_size=2, shuffle=False, num_workers=0) 

2.1.2 Defining the model, loss function, and optimizer

model = models.resnet18(pretrained=True,num_classes=24)
criterion = nn.CrossEntropyLoss()
optimizer = paddle.optimizer.Adam(learning_rate=0.0001, parameters=model.parameters(),weight_decay=0.1)
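If the fixed learning rate plateaus, a schedule can be swapped in. The snippet below is an optional variant rather than part of the original baseline; CosineAnnealingDecay and the T_max value are my choices:

# Cosine learning-rate schedule as a drop-in replacement for the fixed rate
scheduler = paddle.optimizer.lr.CosineAnnealingDecay(learning_rate=0.0001, T_max=40)
optimizer = paddle.optimizer.Adam(learning_rate=scheduler,
                                  parameters=model.parameters(),
                                  weight_decay=0.1)
# call scheduler.step() once per epoch inside the training loop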

2.1.3 Model training

#---------------- Model training ----------------

# Train the model for one epoch
def train(train_loader, model, criterion, optimizer):   
    model.train()  
    train_acc = 0.0  
    train_loss = 0.0    
    
    for i, (input, target) in enumerate(train_loader):               
        output = model(input)        
        loss = criterion(output, target)        
        optimizer.clear_grad()        
        loss.backward()        
        optimizer.step()
        if i % 40 == 0:            
            print(loss.item())
        train_acc += (output.argmax(1) == target).sum().item()                             
        train_loss += loss.item()        
    
    return train_loss/len(train_loader),train_acc / len(train_loader.dataset)
    
# Evaluate the model for one epoch
def validate(val_loader, model, criterion):
    model.eval()
    val_acc = 0.0
    val_loss = 0.0

    with paddle.no_grad():
        for i, (input, target) in enumerate(val_loader):                        
            output = model(input)            
            val_loss += criterion(output, target).item()
            val_acc += (output.argmax(1) == target).sum().item()                   
        return val_loss/len(val_loader),val_acc / len(val_loader.dataset)            

epochs = 40
for i  in range(epochs):
    print("------epoch{}----------".format(i))   
    print("-------Loss----------")      
    train_loss,train_acc = train(train_loader, model, criterion, optimizer) 
    print("loss={}".format(train_loss))  
    print("acc:{}".format(train_acc)) 

    print("-------Val acc----------") 
    val_loss,val_acc = validate(val_loader, model, criterion)  
    print("loss={}".format(val_loss))  
    print("acc:{}".format(val_acc))     

During training, the loss and accuracy on the training and validation sets are printed:

------epoch1----------
-------Loss----------
1.901474952697754
1.421966552734375
1.5676559209823608
1.6595444679260254
0.4624171853065491
1.3034279346466064
0.7691125273704529
0.9662804007530212
1.1717147827148438
0.5677876472473145
1.6588013172149658
1.4357925653457642
1.852024793624878
loss=1.076483992668091
acc:0.6711271230056614
-------Val acc----------
loss=0.5212626896231086
acc:0.835
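The loop above only keeps the final weights in memory. Below is a minimal sketch for persisting the checkpoint with the best validation accuracy; paddle.save/paddle.load and the file name best_model.pdparams are my additions, not part of the original baseline:

best_acc = 0.0
for i in range(epochs):
    train_loss, train_acc = train(train_loader, model, criterion, optimizer)
    val_loss, val_acc = validate(val_loader, model, criterion)
    if val_acc > best_acc:            # keep only the best-performing checkpoint
        best_acc = val_acc
        paddle.save(model.state_dict(), 'best_model.pdparams')

# restore the best weights before predicting the test set
model.set_state_dict(paddle.load('best_model.pdparams'))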

2.1.4 Prediction and submission output

Prediction:

#---------------- Prediction ----------------
test_loader = DataLoader(    
    XunFeiDataset(test_df['path'].values, [0] * test_df.shape[0],            
        transforms.Compose([
                transforms.Resize((224, 224)),        
                transforms.ToTensor(),
                transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))])
            ), batch_size=2, shuffle=False, num_workers=0)

model.eval()    
val_acc = 0.0        

test_pred = []    
with paddle.no_grad():
    for input, _ in test_loader:
        output = model(input)
        test_pred.append(output.cpu().numpy())

pred = np.vstack(test_pred)

Submission output:

#---------------- Submission output ----------------
pd.DataFrame({
    'image': [x.split('/')[-1] for x in test_df['path'].values],
    'label': pred.argmax(1)
}).to_csv('result.csv', index=None)

Running this produces a file named result.csv; submit it to the platform.
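Before uploading, it can help to confirm the file has the same columns and row count as the official template (an optional check; the file names follow the data description above):

sub = pd.read_csv('result.csv')
sample = pd.read_csv('data/提交示例.csv')
assert list(sub.columns) == list(sample.columns)   # same header as the submission template
assert len(sub) == len(sample)                     # one prediction per test image
print(sub.head())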

2.2 K-fold cross-validation

2.2.1 Data splitting, preprocessing, and DataLoader construction

# K-fold cross-validation
def get_k_fold_data(k, i, X, y):
    assert k > 1
    fold_size = X.shape[0] // k
    X_train, y_train = None, None
    for j in range(k):
        idx = slice(j * fold_size, (j + 1) * fold_size)
        X_part, y_part = X[idx], y[idx]
        if j == i:
            X_valid, y_valid = X_part, y_part
        elif X_train is None:
            X_train, y_train = X_part, y_part
        else:
            X_train = pd.concat([X_train, X_part])   # Series.append was removed in newer pandas; concat works everywhere
            y_train = pd.concat([y_train, y_part])
    train_loader = DataLoader(XunFeiDataset(X_train.values, y_train.values,           
                                    transforms.Compose([
                                        transforms.Resize(224), #256
                                        #transforms.RandomHorizontalFlip(),
                                        #transforms.RandomCrop(224), 
                                        transforms.ToTensor(),
                                        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))])
                            ), batch_size=4, shuffle=True, num_workers=0)
    val_loader = DataLoader(XunFeiDataset(X_valid.values, y_valid.values,            
                                transforms.Compose([
                                    transforms.Resize(224),
                                    transforms.ToTensor(),
                                    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))])
                            ), batch_size=2, shuffle=False, num_workers=0) 
    return train_loader, val_loader

This part consists of three steps:

  • Split the training data into k folds, using fold i as the validation set and the rest as the training set;
  • Apply preprocessing to both sets and feature engineering (augmentation) to the training set;
  • Build DataLoaders for the training and validation sets for batched reading during training.

The parameters of get_k_fold_data(k, i, X, y) are:

  • k: the number of folds the training data is split into;
  • i: the index of the fold used as the validation set; the remaining folds form the training set;
  • X: the image paths of the training data, as a pandas Series;
  • y: the labels of the training data, as a pandas Series.

The function returns the DataLoaders of the training and validation sets.

2.2.2 Model training

train_loss = []
val_loss = []       
#---------------- Model training ----------------

# Train the model for one epoch
def train(train_loader, model, criterion, optimizer):   
    model.train()  
    train_acc = 0.0  
    train_loss = 0.0    
    
    for i, (input, target) in enumerate(train_loader):               
        output = model(input)        
        loss = criterion(output, target)        
        optimizer.clear_grad()        
        loss.backward()        
        optimizer.step()
        if i % 40 == 0:            
            print(loss.item())
        train_acc += (output.argmax(1) == target).sum().item()                             
        train_loss += loss.item()        
    
    return train_loss/len(train_loader.dataset),train_acc / len(train_loader.dataset)
    
# Evaluate the model for one epoch
def validate(val_loader, model, criterion):
    model.eval()
    val_acc = 0.0
    val_loss = 0.0

    with paddle.no_grad():
        for i, (input, target) in enumerate(val_loader):                        
            output = model(input)            
            val_loss += criterion(output, target).item()
            val_acc += (output.argmax(1) == target).sum().item()                   
        return val_loss/len(val_loader.dataset),val_acc / len(val_loader.dataset)    

# Prediction on the test set
def predict(test_loader, model, criterion):
    model.eval()

    test_pred = []
    with paddle.no_grad():
        for i, (input, target) in enumerate(test_loader):
            output = model(input)            
            test_pred.append(output.cpu().numpy())                
        return np.vstack(test_pred)
        
test_loader = DataLoader(    
    XunFeiDataset(test_df['path'].values, [0] * test_df.shape[0],            
        transforms.Compose([
                transforms.Resize((224, 224)),        
                transforms.ToTensor(),
                transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))])
            ), batch_size=2, shuffle=False, num_workers=0)

result = pd.DataFrame()
# 模型训练
k_fold = 5
for i in range(k_fold):
    train_loss_mean = 0.0
    train_acc_mean = 0.0
    val_loss_mean = 0.0
    val_acc_mean = 0.0

    train_loader,val_loader = get_k_fold_data(k_fold, i, train_df['path'], train_df['label'])
    model = models.resnet18(pretrained=True, num_classes=24)
    criterion = nn.CrossEntropyLoss()
    optimizer = paddle.optimizer.Adam(learning_rate=0.0001, parameters=model.parameters(), weight_decay=0.1)

    print("---------Fold {} start----------".format(i+1))
    epochs = 50
    for epoch in range(epochs):
        print("------epoch{}----------".format(epoch))   
        print("-------Loss----------")      
        train_loss,train_acc = train(train_loader, model, criterion, optimizer) 
        train_loss_mean += train_loss/epochs
        train_acc_mean += train_acc/epochs
        print("loss={}".format(train_loss))  
        print("acc:{}".format(train_acc)) 

        print("-------Val acc----------") 
        val_loss,val_acc = validate(val_loader, model, criterion) 
        val_loss_mean += val_loss/epochs
        val_acc_mean += val_acc/epochs 
        print("loss={}".format(val_loss))  
        print("acc:{}".format(val_acc)) 
    print("第{}折交叉验证train_loss={},train_acc={},val_loss={},val_acc={}".format(i+1,train_loss_mean,train_acc_mean,val_loss_mean,val_acc_mean))
    
#---------------- Prediction on the test set ----------------
# 1. result: a DataFrame whose columns are the class predictions from each fold
# 2. pred: the element-wise sum of the raw outputs over all folds
    test_pred = predict(test_loader, model, criterion)
    result[str(i)] = test_pred.argmax(1)
    if i==0:
        pred = test_pred
    else:
        pred += test_pred

During training, the loss and accuracy on the training and validation sets are printed:

---------Fold 1 start----------
------epoch0----------
-------Loss----------
4.114025115966797
4.572966575622559
3.054084539413452
2.3948230743408203
2.816720962524414
2.7093520164489746
2.7494349479675293
2.8162894248962402
2.26254940032959
2.2256574630737305
1.4336745738983154
loss=0.6817054544862743
acc:0.205607476635514
-------Val acc----------
loss=1.3423477758992917
acc:0.34345794392523366

2.2.3 Prediction and submission output

After each fold finishes training, the test set is predicted once. Two ways of aggregating the per-fold predictions are provided:

  • result: a DataFrame collecting the k class predictions per test image. For example, with 1020 test images and k=5 folds, result has shape (1020, 5); each column holds the classes predicted by one fold, and the final class of each image is chosen by majority vote across the folds.
  • pred: the element-wise sum of the raw outputs from each fold, an array of shape (1020, 24); each row holds one image's scores over the 24 classes, and the class with the highest score is taken as the prediction.
#---------------- Submission output ----------------
# Save the majority-vote result over the five folds
label = np.array(result.mode(1)[0], dtype=int)
pd.DataFrame({
    'image': [x.split('/')[-1] for x in test_df['path'].values],
    'label': label
}).to_csv('result.csv', index=None)

# Save the argmax of the summed raw outputs (1020 x 24) over the five folds
pd.DataFrame({
    'image': [x.split('/')[-1] for x in test_df['path'].values],
    'label': pred.argmax(1)
}).to_csv('result2.csv', index=None)

The two aggregation methods agree on most images; a small fraction of predictions may differ. A quick way to measure the agreement is sketched below.
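The following optional check compares the two submission files directly (it assumes both result files were written as above):

vote = pd.read_csv('result.csv')       # majority vote across folds
scores = pd.read_csv('result2.csv')    # argmax of the summed raw scores
agree = (vote['label'] == scores['label']).mean()
print('Agreement between the two aggregation methods: {:.2%}'.format(agree))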

2.3 Training with the high-level API

For the high-level API, see the official documentation.

2.3.1 Data splitting, preprocessing, and dataset construction

train_dataset =  XunFeiDataset(train_df['path'].values[:-200], train_df['label'].values[:-200],           
            transforms.Compose([
                transforms.Resize(224), #256
                #transforms.RandomHorizontalFlip(),
                #transforms.RandomCrop(224), 
                transforms.ToTensor(),
                transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))])
            )
val_dataset = XunFeiDataset(train_df['path'].values[-200:], train_df['label'].values[-200:],            
            transforms.Compose([
                transforms.Resize(224),
                transforms.ToTensor(),
                transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))])
            )

2.3.2 Defining the model, loss function, and optimizer

# The model definition differs slightly from the previous sections: the network is wrapped in paddle.Model
model = paddle.Model(models.resnet18(pretrained=True,num_classes=24))
criterion = nn.CrossEntropyLoss()
optimizer = paddle.optimizer.Adam(learning_rate=0.0001, parameters=model.parameters(),weight_decay=0.1)

# Prepare for training
model.prepare(optimizer, criterion, Accuracy(topk=(1, 5)))  # Accuracy(topk=(1, 5)) reports both top-1 and top-5 accuracy

2.3.3 Model training

# Start training; see the official documentation for more arguments
model.fit(train_dataset,
          val_dataset,
          epochs=50,
          batch_size=4,
          save_dir="./output", #模型保存路径
          num_workers=0)

After each epoch, the model parameters are saved under "./output".

Running it prints output like the following:

step  10/486 - loss: 3.3036 - acc_top1: 0.0250 - acc_top5: 0.2000 - 323ms/step
step  20/486 - loss: 3.6178 - acc_top1: 0.0250 - acc_top5: 0.2125 - 239ms/step
step  30/486 - loss: 3.7559 - acc_top1: 0.0500 - acc_top5: 0.2667 - 213ms/step
step  40/486 - loss: 3.0506 - acc_top1: 0.0688 - acc_top5: 0.2562 - 198ms/step
step  50/486 - loss: 3.1231 - acc_top1: 0.0850 - acc_top5: 0.2900 - 189ms/step
step  60/486 - loss: 3.6662 - acc_top1: 0.0792 - acc_top5: 0.2750 - 183ms/step
step  70/486 - loss: 3.2053 - acc_top1: 0.0750 - acc_top5: 0.2750 - 178ms/step
step  80/486 - loss: 1.9316 - acc_top1: 0.0906 - acc_top5: 0.3125 - 175ms/step
step  90/486 - loss: 3.4731 - acc_top1: 0.0889 - acc_top5: 0.3222 - 173ms/step
step 100/486 - loss: 3.4838 - acc_top1: 0.0925 - acc_top5: 0.3300 - 171ms/step
step 110/486 - loss: 3.5857 - acc_top1: 0.0886 - acc_top5: 0.3318 - 170ms/step
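After training finishes, the checkpoints saved under "./output" can be listed to decide which epoch to load in the evaluation step below (a small optional check; model.load takes the file prefix of a checkpoint, e.g. "./output/48"):

# list the saved checkpoint files and pick the prefix of the epoch to evaluate
print(sorted(os.listdir('./output')))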

2.3.4 Model evaluation

# Model evaluation
model.load("./output/48")  # load the parameters saved after epoch 48
model.evaluate(val_dataset, verbose=1)

Running it prints:

Eval begin...
step 200/200 [==============================] - loss: 0.0074 - acc_top1: 0.9400 - acc_top5: 0.9950 - 9ms/step           
Eval samples: 200

2.3.5 Prediction and submission output

# Prediction
test_dataset = XunFeiDataset(test_df['path'].values, [0] * test_df.shape[0],            
        transforms.Compose([
                transforms.Resize((224, 224)),        
                transforms.ToTensor(),
                transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))])
            )
test_result = model.predict(test_dataset,verbose=1)
label = []
for i in range(len(test_result[0])):
    label.append(test_result[0][i].argmax())
pd.DataFrame({
    'image': [x.split('/')[-1] for x in test_df['path'].values],
    'label': label
}).to_csv('result.csv', index=None)

3. Closing Remarks

This post provides three baselines for image classification competitions. For a different dataset, you only need to change how the Dataset reads the images and labels; on top of that, you can improve the score by adjusting the feature engineering, the model, the learning rate, and so on. If you find good ways to raise the score, please share them in the comments or by private message so we can learn together. Thank you!


Reposted from blog.csdn.net/cyj972628089/article/details/126241935