PaddlePaddle | CV Pandemic Special (2): Gesture Recognition

This section follows the Baidu AI Studio course; these notes are a record of it.

This section builds a classification network for hand gestures. Before classifying, let's look at the data:
(figure: sample images of the ten gestures, arranged left to right, top to bottom, representing the digits 0-9)
Looking closely, this dataset is not trivial: lighting and viewing angle vary from image to image. Curiously, all the hands appear to be right hands.
All code here was debugged locally, but the walkthrough follows the online steps as closely as possible.

1. Import the packages

# Imports for the ResNet model and the data pipeline
import os
import time
import random
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
from multiprocessing import cpu_count

import paddle
import paddle.fluid as fluid
import paddle.fluid.layers as layers
from paddle.fluid.layer_helper import LayerHelper
from paddle.fluid.dygraph.nn import Conv2D, Pool2D, BatchNorm, Linear
from paddle.fluid.dygraph.base import to_variable

2. The ResNet network

ResNet itself won't be introduced in detail here; the implementation below is taken from the official example.
The official code, however, only provides ResNet-50 and deeper variants; there is no ResNet-18. ResNet-18 has fewer layers and is handier for quick experiments, and the modification is simple. Referring to the ResNet architecture table:
(figure: ResNet architecture table)
all that is needed is one extra branch: depth = [2, 2, 2, 2].
Since no pretrained model is used, the weights are initialized with Xavier initialization: param_attr = fluid.initializer.Xavier(uniform=False)
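For reference, the depth tables of the supported variants can be written as a plain mapping. This is a pure-Python sketch (the names are mine, not from the official code); note that this implementation still builds bottleneck blocks for every variant, so only the block counts change:

```python
# Residual-block counts for stages c2-c5 of each ResNet variant.
RESNET_DEPTHS = {
    18:  [2, 2, 2, 2],   # the newly added configuration
    50:  [3, 4, 6, 3],
    101: [3, 4, 23, 3],
    152: [3, 8, 36, 3],
}

def stage_depths(layers):
    """Return the per-stage residual-block counts for a supported variant."""
    if layers not in RESNET_DEPTHS:
        raise ValueError("supported layers are %s" % sorted(RESNET_DEPTHS))
    return RESNET_DEPTHS[layers]
```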

3. Generate the training/test lists

# Generate the image lists
data_path = '/home/aistudio/data/data23668/Dataset'
character_folders = os.listdir(data_path)
# print(character_folders)
if(os.path.exists('./train_data.list')):
    os.remove('./train_data.list')
if(os.path.exists('./test_data.list')):
    os.remove('./test_data.list')
    
for character_folder in character_folders:
    
    with open('./train_data.list', 'a') as f_train:
        with open('./test_data.list', 'a') as f_test:
            if character_folder == '.DS_Store':
                continue
            character_imgs = os.listdir(os.path.join(data_path,character_folder))
            count = 0 
            for img in character_imgs:
                if img =='.DS_Store':
                    continue
                if count%10 == 0:
                    f_test.write(os.path.join(data_path,character_folder,img) + '\t' + character_folder + '\n')
                else:
                    f_train.write(os.path.join(data_path,character_folder,img) + '\t' + character_folder + '\n')
                count +=1
print('Lists generated')
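The count % 10 == 0 rule above sends every tenth image to the test list, giving roughly a 90/10 split. The same logic in isolation, on hypothetical file names:

```python
def split_by_modulo(items, test_every=10):
    """Route every `test_every`-th item to the test set, the rest to train."""
    train, test = [], []
    for count, item in enumerate(items):
        if count % test_every == 0:
            test.append(item)
        else:
            train.append(item)
    return train, test

# 20 dummy images: indices 0 and 10 end up in the test set.
train, test = split_by_modulo(["img%02d.jpg" % i for i in range(20)])
```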

This generates the train/test path lists; opening train_data.list confirms the format:
(figure: excerpt of train_data.list, one "path<TAB>label" entry per line)
The entries, however, are not randomly ordered, which hurts training, so we shuffle them: shuffle_list('./train_data.list', './shuffle_train_data.list')

def shuffle_list(readFile, writeFile):
    from random import shuffle
    with open(readFile, 'r') as f:
        lines = f.readlines()
    shuffle(lines)  # shuffle the lines in place
    with open(writeFile, 'w') as f:
        for data in lines:
            f.write(data)
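A quick self-contained check of the shuffle step, using a temporary directory and made-up entries (the file names here are illustrative):

```python
import os
import random
import tempfile

def shuffle_list(readFile, writeFile):
    """Read all lines, shuffle them, and write them to a new file."""
    with open(readFile, 'r') as f:
        lines = f.readlines()
    random.shuffle(lines)
    with open(writeFile, 'w') as f:
        f.writelines(lines)

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, 'train_data.list')
dst = os.path.join(tmp, 'shuffle_train_data.list')
with open(src, 'w') as f:
    f.writelines('path/%d.jpg\t%d\n' % (i, i % 10) for i in range(50))

shuffle_list(src, dst)
with open(dst) as f:
    shuffled = f.readlines()
# Same entries, order (very likely) changed.
```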

(figure: the shuffled train_data.list)

4. Define the readers for the training and test sets

Unlike the official example, I added data augmentation here: 1. random rotation; 2. random horizontal flip. Augmentation enlarges the effective training set and reduces overfitting. The images are also divided by 255 to normalize pixel values into [0, 1]. (Although it seems I forgot to enable the augmentation in the final run...)

# Readers for the training and test sets
def data_mapper_train(sample, enhance=True):
    img, label = sample
    img = Image.open(img)
    img = img.resize((100, 100), Image.ANTIALIAS)
    if enhance == True:
        # Data augmentation
        # random rotation angle, applied counterclockwise or clockwise
        angle = random.randint(0, 15)
        f = random.randint(0, 1)
        if f > 0:
            img = img.rotate(angle)
        else:
            img = img.rotate(-angle)
        # random horizontal flip
        flag = random.randint(0, 1)
        if flag > 0:
            img = img.transpose(Image.FLIP_LEFT_RIGHT)
    img = np.array(img).astype('float32')
    img = img.transpose((2, 0, 1))
    img = img/255.0
    return img, label
def data_mapper_test(sample):
    img, label = sample
    img = Image.open(img)
    img = img.resize((100, 100), Image.ANTIALIAS)
    img = np.array(img).astype('float32')
    img = img.transpose((2, 0, 1))
    img = img/255.0
    return img, label
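Stripped of the file I/O, both mappers do the same three steps: resize to 100x100, transpose HWC to CHW (PaddlePaddle's Conv2D expects channels first), and divide by 255. In numpy alone, with a synthetic image standing in for the resized photo:

```python
import numpy as np

# Synthetic 100x100 RGB image with uint8-range values, in HWC layout.
img_hwc = np.random.randint(0, 256, size=(100, 100, 3)).astype('float32')

# HWC -> CHW, matching the (3, 100, 100) shape fed to the network.
img_chw = img_hwc.transpose((2, 0, 1))

# Normalize pixel values into [0, 1].
img_chw = img_chw / 255.0
```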

5. Visualizing the training/test process

Four lists are defined to store the key metrics, for example:

    trainAccList = list()
    trainLossList =list()

The official example was also modified. With plain cross-entropy, loss = fluid.layers.cross_entropy(predict, label), the loss easily becomes NaN when the logits grow very large or very small, so the fused operator is used instead: loss = fluid.layers.softmax_with_cross_entropy(predict, label), which applies softmax internally and computes the loss in a numerically stable way. The learning-rate schedule is cosine annealing: fluid.layers.cosine_decay
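The NaN issue can be reproduced in numpy: computing softmax first and then taking the log overflows for extreme logits, while the fused log-sum-exp form (the trick softmax_with_cross_entropy relies on) stays finite. A minimal sketch:

```python
import numpy as np

logits = np.array([1000.0, 0.0, -1000.0])  # extreme logits
label = 0

# Naive route: exp(1000) overflows to inf, the probabilities become
# nan, and so does the loss.
with np.errstate(over='ignore', invalid='ignore'):
    probs = np.exp(logits) / np.exp(logits).sum()
    naive_loss = -np.log(probs[label])

# Fused route: cross-entropy via the log-sum-exp trick stays finite.
m = logits.max()
log_sum_exp = m + np.log(np.exp(logits - m).sum())
stable_loss = log_sum_exp - logits[label]
```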

Full code:

import os
import time
import random
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
from multiprocessing import cpu_count

import paddle
import paddle.fluid as fluid
import paddle.fluid.layers as layers
from paddle.fluid.layer_helper import LayerHelper
from paddle.fluid.dygraph.nn import Conv2D, Pool2D, BatchNorm, Linear
from paddle.fluid.dygraph.base import to_variable


# ResNet follows each convolution with BatchNorm to improve numerical stability
# Define the conv + batch-norm block
class ConvBNLayer(fluid.dygraph.Layer):
    def __init__(self,
                 num_channels,
                 num_filters,
                 filter_size,
                 stride=1,
                 groups=1,
                 act=None,
                 param_attr = fluid.initializer.Xavier(uniform=False)):
        """
        name_scope, 模块的名字
        num_channels, 卷积层的输入通道数
        num_filters, 卷积层的输出通道数
        stride, 卷积层的步幅
        groups, 分组卷积的组数,默认groups=1不使用分组卷积
        act, 激活函数类型,默认act=None不使用激活函数
        """
        super(ConvBNLayer, self).__init__()

        # Create the convolution layer
        self._conv = Conv2D(
            num_channels=num_channels,
            num_filters=num_filters,
            filter_size=filter_size,
            stride=stride,
            padding=(filter_size - 1) // 2,
            groups=groups,
            act=None,
            bias_attr=False,
            param_attr=param_attr)

        # Create the BatchNorm layer
        self._batch_norm = BatchNorm(num_filters, act=act)

    def forward(self, inputs):
        y = self._conv(inputs)
        y = self._batch_norm(y)
        return y

# Define the residual block
# Each residual block applies three convolutions to its input, then adds a shortcut connection
# If the third convolution's output shape differs from the input's, a 1x1 convolution reshapes the input to match
class BottleneckBlock(fluid.dygraph.Layer):
    def __init__(self,
                 name_scope,
                 num_channels,
                 num_filters,
                 stride,
                 shortcut=True):
        super(BottleneckBlock, self).__init__(name_scope)
        # First convolution, 1x1
        self.conv0 = ConvBNLayer(
            num_channels=num_channels,
            num_filters=num_filters,
            filter_size=1,
            act='leaky_relu')
        # Second convolution, 3x3
        self.conv1 = ConvBNLayer(
            num_channels=num_filters,
            num_filters=num_filters,
            filter_size=3,
            stride=stride,
            act='leaky_relu')
        # Third convolution, 1x1, with the output channel count multiplied by 4
        self.conv2 = ConvBNLayer(
            num_channels=num_filters,
            num_filters=num_filters * 4,
            filter_size=1,
            act=None)

        # If conv2's output has the same shape as the block input, shortcut=True
        # Otherwise shortcut=False: a 1x1 convolution reshapes the input to match conv2's output
        if not shortcut:
            self.short = ConvBNLayer(
                num_channels=num_channels,
                num_filters=num_filters * 4,
                filter_size=1,
                stride=stride)

        self.shortcut = shortcut

        self._num_channels_out = num_filters * 4

    def forward(self, inputs):
        y = self.conv0(inputs)
        conv1 = self.conv1(y)
        conv2 = self.conv2(conv1)

        # If shortcut=True, add the input directly to conv2's output
        # Otherwise run the input through self.short first to match conv2's output shape
        if self.shortcut:
            short = inputs
        else:
            short = self.short(inputs)

        y = fluid.layers.elementwise_add(x=short, y=conv2)
        layer_helper = LayerHelper(self.full_name(), act='relu')
        return layer_helper.append_activation(y)

# Define the ResNet model
class ResNet(fluid.dygraph.Layer):
    def __init__(self, name_scope, layers=50, class_dim=1):
        """
        name_scope,模块名称
        layers, 网络层数,可以是50, 101或者152
        class_dim,分类标签的类别数
        """
        super(ResNet, self).__init__(name_scope)
        self.layers = layers
        supported_layers = [18, 50, 101, 152]
        assert layers in supported_layers, \
            "supported layers are {} but input layer is {}".format(supported_layers, layers)

        if layers == 50:
            # ResNet-50: stages 2-5 contain 3, 4, 6, and 3 residual blocks
            depth = [3, 4, 6, 3]
        elif layers == 101:
            # ResNet-101: stages 2-5 contain 3, 4, 23, and 3 residual blocks
            depth = [3, 4, 23, 3]
        elif layers == 152:
            # ResNet-152: stages 2-5 contain 3, 8, 36, and 3 residual blocks
            depth = [3, 8, 36, 3]
        elif layers == 18:
            # Newly added ResNet-18 configuration
            depth = [2, 2, 2, 2]
        # Output channel counts of the convolutions in the residual blocks
        num_filters = [64, 128, 256, 512]

        # First ResNet module: a 7x7 convolution followed by a max-pooling layer
        self.conv = ConvBNLayer(
            num_channels=3,
            num_filters=64,
            filter_size=7,
            stride=2,
            act='relu')
        self.pool2d_max = Pool2D(
            pool_size=3,
            pool_stride=2,
            pool_padding=1,
            pool_type='max')

        # ResNet stages two through five: c2, c3, c4, c5
        self.bottleneck_block_list = []
        num_channels = 64
        for block in range(len(depth)):
            shortcut = False
            for i in range(depth[block]):
                bottleneck_block = self.add_sublayer(
                    'bb_%d_%d' % (block, i),
                    BottleneckBlock(
                        self.full_name(),
                        num_channels=num_channels,
                        num_filters=num_filters[block],
                        stride=2 if i == 0 and block != 0 else 1,  # c3, c4, c5 use stride=2 in their first residual block; all others use stride=1
                        shortcut=shortcut))
                num_channels = bottleneck_block._num_channels_out
                self.bottleneck_block_list.append(bottleneck_block)
                shortcut = True

        # Global average pooling over the c5 feature map
        self.pool2d_avg = Pool2D(pool_size=7, pool_type='avg', global_pooling=True)

        # stdv sets the range of the fully connected layer's uniform random initialization
        import math
        stdv = 1.0 / math.sqrt(2048 * 1.0)
        
        # Fully connected output layer, one unit per class
        self.out = Linear(input_dim=2048, output_dim=class_dim,
                      param_attr=fluid.param_attr.ParamAttr(
                          initializer=fluid.initializer.Uniform(-stdv, stdv)))

    def forward(self, inputs):
        y = self.conv(inputs)
        y = self.pool2d_max(y)
        for bottleneck_block in self.bottleneck_block_list:
            y = bottleneck_block(y)
        y = self.pool2d_avg(y)
        y = fluid.layers.reshape(y, [y.shape[0], -1])
        y = self.out(y)
        return y
    

def data_mapper(sample):
    img, label = sample
    img = Image.open(img)
    img = img.resize((100, 100), Image.ANTIALIAS)
    img = np.array(img).astype('float32')
    img = img.transpose((2, 0, 1))
    img = img/255.0
    return img, label
# Readers for the training and test sets
def data_mapper_train(sample ,enhance=False):
    img, label = sample
    img = Image.open(img)
    img = img.resize((100, 100), Image.ANTIALIAS)
    if enhance == True:
        # Data augmentation
        # random rotation angle, applied counterclockwise or clockwise
        angle = random.randint(0,8)
        f = random.randint(0,1)
        if f > 0:
            img = img.rotate(angle)
        else:
            img = img.rotate(-angle)
        # random horizontal flip
        flag = random.randint(0,1)
        if flag > 0:
            img = img.transpose(Image.FLIP_LEFT_RIGHT)
    img = np.array(img).astype('float32')
    img = img.transpose((2, 0, 1))
    img = img/255.0
    return img, label
def data_mapper_test(sample):
    img, label = sample
    img = Image.open(img)
    img = img.resize((100, 100), Image.ANTIALIAS)
    img = np.array(img).astype('float32')
    img = img.transpose((2, 0, 1))
    img = img/255.0
    return img, label
def data_reader(data_list_path, model):
    def reader():
        with open(data_list_path, 'r') as f:
            lines = f.readlines()
            for line in lines:
                img, label = line.split('\t')
                yield img, int(label)
    if model == "train":
        return paddle.reader.xmap_readers(data_mapper_train, reader, cpu_count(), 512)
    elif model == "test":
        return paddle.reader.xmap_readers(data_mapper_test, reader, cpu_count(), 512)
def shuffle_list(readFile, writeFile):
    from random import shuffle
    with open(readFile, 'r') as f:
        lines = f.readlines()
    shuffle(lines)  # shuffle the lines in place
    with open(writeFile, 'w') as f:
        for data in lines:
            f.write(data)
# (On first run, generate the image lists here using the code from step 3.)
if __name__ == "__main__":
    
    # Shuffle the training list
    # shuffle_list('./train_data.list', './shuffle_train_data.list')
    # Data provider for training
    train_reader = paddle.batch(reader=paddle.reader.shuffle(reader=data_reader('./shuffle_train_data.list', model="train"), buf_size=256), batch_size=32)
    # Data provider for testing
    test_reader = paddle.batch(reader=data_reader('./test_data.list', model="test"), batch_size=32)
    testAccList = list()
    testLossList = list()
    trainAccList = list()
    trainLossList = list()
    with fluid.dygraph.guard():
        # model = DensenNet(True)  # alternative model
        model = ResNet("ResNet", layers=18, class_dim=10)
        model.train()  # training mode
        # opt = fluid.optimizer.SGDOptimizer(learning_rate=0.01,
        #                                    parameter_list=model.parameters())  # plain SGD
        # opt = fluid.optimizer.Momentum(learning_rate=0.001, momentum=0.9, parameter_list=model.parameters())
        opt = fluid.optimizer.AdamOptimizer(
            learning_rate=fluid.layers.cosine_decay(
                learning_rate=1e-3, step_each_epoch=1000, epochs=60),
            parameter_list=model.parameters())
        epochs_num = 100  # number of training epochs
    
        for pass_num in range(epochs_num):
            trainACC = 0
            trainLoss = 0
            count = 0
            for batch_id, data in enumerate(train_reader()):
    
                images = np.array([x[0].reshape(3, 100, 100) for x in data], np.float32)
    
                labels = np.array([x[1] for x in data]).astype('int64')
                labels = labels[:, np.newaxis]
                # print(images.shape)
                image = fluid.dygraph.to_variable(images)
                label = fluid.dygraph.to_variable(labels)
                predict = model(image)  # forward pass
                # print(predict)
                sf_predict = fluid.layers.softmax(predict)
                # loss = fluid.layers.cross_entropy(predict, label)
                loss = fluid.layers.softmax_with_cross_entropy(predict, label)
                avg_loss = fluid.layers.mean(loss)  # mean loss over the batch

                acc = fluid.layers.accuracy(sf_predict, label)  # batch accuracy
                trainACC += acc.numpy()
                trainLoss += avg_loss.numpy()
                if batch_id != 0 and batch_id % 50 == 0:
                    print(
                        "train_pass:{},batch_id:{},train_loss:{},train_acc:{}".format(pass_num, batch_id, avg_loss.numpy(),
                                                                                      acc.numpy()))
    
                avg_loss.backward()
                opt.minimize(avg_loss)
                model.clear_gradients()
                count = batch_id
            trainAccList.append(trainACC/(count + 1))
            trainLossList.append(trainLoss/(count + 1))


        # Plot the training curves
        plt.figure(dpi = 120)    
        train_x = range(len(trainAccList))
        train_y = trainAccList   
        plt.plot(train_x, train_y, label='Train')
     
        plt.legend(loc='upper right')
        plt.ylabel('ACC')
        plt.xlabel('Epoch')
        plt.savefig("ACC.png")
        plt.show()  
            
        plt.figure(dpi = 120)    
        train_x = range(len(trainLossList))
        train_y = trainLossList   
        plt.plot(train_x, train_y, label='Train')
    
        plt.legend(loc='upper right')
        plt.ylabel('Loss')
        plt.xlabel('Epoch')
        plt.savefig("Loss.png")
        plt.show()  
                    
            
        fluid.save_dygraph(model.state_dict(), 'MyDNN')  # save the model
    # Model validation
    with fluid.dygraph.guard():
        accs = []
        model_dict, _ = fluid.load_dygraph('MyDNN')
        # model = DensenNet()
        model = ResNet("ResNet", layers = 18, class_dim = 10)
        model.load_dict(model_dict)  # load model parameters
        model.eval()  # evaluation mode
        for batch_id, data in enumerate(test_reader()):  # iterate over the test set
            images = np.array([x[0].reshape(3, 100, 100) for x in data], np.float32)
            labels = np.array([x[1] for x in data]).astype('int64')
            labels = labels[:, np.newaxis]
            image = fluid.dygraph.to_variable(images)
            label = fluid.dygraph.to_variable(labels)
            predict = model(image)
            acc = fluid.layers.accuracy(predict, label)
            accs.append(acc.numpy()[0])
            avg_acc = np.mean(accs)
        print(avg_acc)
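For intuition, fluid.layers.cosine_decay follows lr = base_lr/2 * (cos(epoch * pi / epochs) + 1), with epoch = step // step_each_epoch. A numpy sketch using the same hyperparameters as the optimizer above (base lr 1e-3, 1000 steps per epoch, 60 annealing epochs):

```python
import numpy as np

def cosine_decay(base_lr, step, step_each_epoch, epochs):
    """Mirror the documented formula of fluid.layers.cosine_decay."""
    epoch = step // step_each_epoch
    return base_lr * 0.5 * (np.cos(epoch * np.pi / epochs) + 1.0)

# The rate starts at the base value and falls toward zero as the
# epoch counter approaches the annealing period.
lr_start = cosine_decay(1e-3, step=0, step_each_epoch=1000, epochs=60)
lr_mid = cosine_decay(1e-3, step=30 * 1000, step_each_epoch=1000, epochs=60)
lr_late = cosine_decay(1e-3, step=59 * 1000, step_each_epoch=1000, epochs=60)
```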

Output:

......
train_pass:91,batch_id:50,train_loss:[7.75933e-06],train_acc:[1.]
train_pass:92,batch_id:50,train_loss:[9.98374e-07],train_acc:[1.]
train_pass:93,batch_id:50,train_loss:[4.098817e-05],train_acc:[1.]
train_pass:94,batch_id:50,train_loss:[1.8738103e-06],train_acc:[1.]
train_pass:95,batch_id:50,train_loss:[5.029129e-07],train_acc:[1.]
train_pass:96,batch_id:50,train_loss:[1.7801412e-05],train_acc:[1.]
train_pass:97,batch_id:50,train_loss:[7.979372e-06],train_acc:[1.]
train_pass:98,batch_id:50,train_loss:[1.0058262e-06],train_acc:[1.]
train_pass:99,batch_id:50,train_loss:[1.906916e-05],train_acc:[1.]
1.0

(figure: training accuracy curve)
(figure: training loss curve)
Trained on the cloud, the model reaches about 99% accuracy on the test set, a respectable result:
(figure: test accuracy output)

6. Model testing

# Load the image to predict and run inference
def load_image(path):
    img = Image.open(path)
    img = img.resize((100, 100), Image.ANTIALIAS)
    img = np.array(img).astype('float32')
    img = img.transpose((2, 0, 1))
    img = img/255.0
    print(img.shape)
    return img

# Build the inference pass in dygraph mode
with fluid.dygraph.guard():
    infer_path = '手势.JPG'
    model = ResNet("ResNet", layers=18, class_dim=10)
    model_dict, _ = fluid.load_dygraph('MyDNN')
    model.load_dict(model_dict)  # load model parameters
    model.eval()  # evaluation mode
    infer_img = load_image(infer_path)
    infer_img = np.array(infer_img).astype('float32')
    infer_img = infer_img[np.newaxis, :, :, :]
    infer_img = fluid.dygraph.to_variable(infer_img)
    result = model(infer_img)
    display(Image.open('手势.JPG'))  # display() comes from the notebook (IPython) environment
    print(np.argmax(result.numpy()))

Output:

(3, 100, 100)
5
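The batch-axis and argmax handling from the inference code, in isolation (numpy only, with made-up logits):

```python
import numpy as np

# A single CHW image gains a leading batch axis before entering the model.
img = np.zeros((3, 100, 100), dtype='float32')
batch = img[np.newaxis, :, :, :]

# The model returns one row of 10 logits per image; the predicted class
# is the index of the largest logit (dummy values below).
logits = np.array([[0.1, 0.0, 0.2, 0.0, 0.0, 3.5, 0.0, 0.0, 0.0, 0.0]])
pred = int(np.argmax(logits))
```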

(figure: the input gesture image)

Reposted from blog.csdn.net/qq_24739717/article/details/105256361