TF2.0 Model Training

Overview

The previous section, TF2.0 Model Creation, introduced three ways to build a model. This section covers how to train one: I will show how to read data and train a model with the following three approaches.
This tutorial uses image classification as the running example.
1. Training with the fit method
2. Training with the fit_generator method
3. Custom training
Note: some of the code in this section is adapted from the official tutorial "Load images with tf.data".

Dataset Introduction

The dataset is tf_flowers, a five-class flower dataset containing daisies (daisy), tulips (tulips), sunflowers (sunflowers), roses (roses), and dandelions (dandelion).

import pathlib
from tensorflow.keras.utils import get_file

data_root = get_file(origin='https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
                     fname='flower_photos', 
                     untar=True, 
                     cache_dir='./', 
                     cache_subdir='datasets')
                     
data_path = pathlib.Path(data_root)

print("data_path:",data_path)
for item in data_path.iterdir():
    print(item)

Output:

Downloading data from https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz
228818944/228813984 [==============================] - 1s 0us/step
data_path: datasets/flower_photos
datasets/flower_photos/daisy
datasets/flower_photos/tulips
datasets/flower_photos/sunflowers
datasets/flower_photos/roses
datasets/flower_photos/LICENSE.txt
datasets/flower_photos/dandelion
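
To get a feel for the data, you can count the images in each class directory. A minimal sketch, assuming the download above has finished and the images are stored as .jpg files (which is the case for tf_flowers):

import pathlib

data_path = pathlib.Path('datasets/flower_photos')
# count the .jpg files in every class directory
for class_dir in sorted(p for p in data_path.iterdir() if p.is_dir()):
    print(class_dir.name, len(list(class_dir.glob('*.jpg'))))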

1. Training with the fit Method

The first approach: train the model with the fit method.
Steps:
1. Prepare the data
2. Create the model
3. Compile the model
4. Train the model

Prepare the Data

Get the paths of all the flower images:

import random

all_image_paths = list(data_path.glob('*/*'))              # all files in the class subdirectories
all_image_paths = [str(path) for path in all_image_paths]  # convert the pathlib Path objects to str
random.shuffle(all_image_paths)                            # shuffle the order
print(all_image_paths[0])
# Output:
# datasets\flower_photos\roses\3422228549_f147d6e642.jpg

Get all the flower labels:

label_names = []
for item in data_path.glob('*/'):   # iterate over the entries in the data directory
    if item.is_dir():               # keep only the class directories
        label_names.append(item.name)
label_names.sort()                  # sort once for a stable ordering

label_name_index = dict((name, index) for index, name in enumerate(label_names))
print(label_names)
print(label_name_index)
# Output:
# ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
# {'daisy': 0, 'dandelion': 1, 'roses': 2, 'sunflowers': 3, 'tulips': 4}

# take each file's parent directory and map it to a label via the dictionary
all_image_labels = [label_name_index[pathlib.Path(path).parent.name] for path in all_image_paths]
print(all_image_labels[0])
# Output:
# 2

Define some variables (with 3670 images and a batch size of 64, steps_per_epoch works out to 3670 // 64 = 57, which matches the "Train for 57 steps" in the output below):

input_shape=(192,192,3)
classes    =len(label_names)
batch_size =64
epochs     =10
steps_per_epoch=len(all_image_paths)//batch_size

We now have the paths of all images in all_image_paths and their labels in all_image_labels.
Next we write a function, load_preprocess_image, to load and preprocess an image, and a function, make_image_label_datasets, to pair each image with its label.

import tensorflow as tf

def load_preprocess_image(image_paths):
    image = tf.io.read_file(image_paths)            #img_string
    image = tf.image.decode_jpeg(image, channels=3) #img_tensor
    image = tf.image.resize(image, [192,192])       #img_resize
    image = image/255.0                             #img_normal    
    return image

def make_image_label_datasets(image_paths, image_labels):
    return load_preprocess_image(image_paths), image_labels
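
As a quick sanity check (a small sketch using the first path in all_image_paths), you can run load_preprocess_image on a single image and inspect the result:

sample = load_preprocess_image(all_image_paths[0])
print(sample.shape, sample.dtype)                                   # (192, 192, 3) float32
print(float(tf.reduce_min(sample)), float(tf.reduce_max(sample)))   # pixel values lie in [0, 1]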

Build the Dataset

datasets = tf.data.Dataset.from_tensor_slices((all_image_paths, all_image_labels))
image_label_datasets = datasets.map(make_image_label_datasets)
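
As an optional tweak (not used in the rest of this tutorial), the map call can also decode images in parallel via the num_parallel_calls argument of tf.data.Dataset.map:

# let tf.data decide how many parallel calls to use when mapping
image_label_datasets = datasets.map(make_image_label_datasets,
                                    num_parallel_calls=tf.data.experimental.AUTOTUNE)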

Take two images with their labels and visualize them:

import matplotlib.pyplot as plt
import numpy as np

plt.figure(figsize=(6,6))
n = 0
for img, label in image_label_datasets.take(2):
    n = n + 1
    image = np.array(img.numpy() * 255.0).astype("uint8")
    plt.subplot(1, 2, n)
    plt.title('label:' + str(label.numpy()))
    plt.imshow(image)
plt.show()

Create the Model

Since this is an introductory tutorial we skip transfer learning and instead build a small VGG-style model, using the second creation method from the previous section (the functional API).

from tensorflow.keras import Model
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Input
def my_model(input_shape, classes):
    inputs=Input(input_shape)
    # Block 1
    x = Conv2D(64,  (3, 3), activation='relu', padding='same')(inputs)
    x = MaxPooling2D((2, 2), strides=(2, 2))(x)
    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
    
    x = Flatten()(x)
    x = Dense(512, activation='relu')(x)
    x = Dense(256, activation='relu')(x)
    x = Dense(classes, activation='softmax')(x)
    model = Model(inputs, x)
    return model
model = my_model(input_shape, classes)
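
Before compiling, you can check the architecture with model.summary() (the full table is omitted here):

model.summary()            # prints each layer's output shape and parameter count
print(model.output_shape)  # (None, 5): one probability per flower class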

Compile the Model

The optimizer is Adam. Because the labels are integers 0, 1, 2, … rather than one-hot vectors such as [1, 0, 0, …], [0, 1, 0, …], [0, 0, 1, …], the loss function is sparse_categorical_crossentropy rather than categorical_crossentropy; the small example after the compile code below illustrates the difference.

from tensorflow.keras.optimizers import Adam
opt=Adam()
model.compile(optimizer=opt,
              loss='sparse_categorical_crossentropy',
              metrics=["accuracy"])

Train the Model

image_label_datasets = image_label_datasets.shuffle(buffer_size=len(all_image_paths))
image_label_datasets = image_label_datasets.repeat()
image_label_datasets = image_label_datasets.batch(batch_size)
# While the model is training, `prefetch` lets the dataset fetch batches in the background, so data preparation and training run as a pipeline.
image_label_datasets = image_label_datasets.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)

model.fit(image_label_datasets, epochs=epochs, steps_per_epoch=steps_per_epoch)

Output:

Train for 57 steps
Epoch 1/10
57/57 [==============================] - 32s 569ms/step - loss: 1.4404 - accuracy: 0.4046
Epoch 2/10
57/57 [==============================] - 21s 377ms/step - loss: 1.0551 - accuracy: 0.5762
Epoch 3/10
57/57 [==============================] - 21s 368ms/step - loss: 0.9082 - accuracy: 0.6417
Epoch 4/10
57/57 [==============================] - 21s 363ms/step - loss: 0.7993 - accuracy: 0.6853
Epoch 5/10
57/57 [==============================] - 21s 360ms/step - loss: 0.6667 - accuracy: 0.7410
Epoch 6/10
57/57 [==============================] - 19s 337ms/step - loss: 0.4645 - accuracy: 0.8331
Epoch 7/10
57/57 [==============================] - 17s 299ms/step - loss: 0.3154 - accuracy: 0.8890
Epoch 8/10
57/57 [==============================] - 15s 257ms/step - loss: 0.2015 - accuracy: 0.9328
Epoch 9/10
57/57 [==============================] - 14s 254ms/step - loss: 0.1692 - accuracy: 0.9487
Epoch 10/10
57/57 [==============================] - 14s 253ms/step - loss: 0.1205 - accuracy: 0.9638
<tensorflow.python.keras.callbacks.History at 0x7fb7d4467f28>

2. Training with the fit_generator Method

The second approach: train the model with the fit_generator method.
If you worked through the first approach as a beginner, you probably felt that preparing the data by hand is quite tedious.
Next we use the ImageDataGenerator class and its flow_from_directory method to read the data far more conveniently.
Steps:
1. Build the generator
2. Create the model
3. Compile the model
4. Train the model

Build the Generator

The ImageDataGenerator class and its flow_from_directory method take many parameters; see the official documentation for details (if you are a beginner, you will be reading the docs a lot).

from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os

def make_Gen(data_path):
    train_dataNums = 0
    train_gen  = ImageDataGenerator(rescale=1/255.0)  # rescaling/normalization only, no augmentation

    # count every file under data_path (this also counts LICENSE.txt,
    # which is close enough for computing steps_per_epoch)
    for root, dirs, files in os.walk(data_path):
        for file in files:
            train_dataNums += 1

    return train_gen, train_dataNums

data_path='datasets/flower_photos'
train_gen, train_dataNums = make_Gen(data_path)
train_generator = train_gen.flow_from_directory(
    directory   = data_path,
    target_size = (192,192),
    batch_size  = batch_size,
    class_mode  = 'categorical')  # with class_mode='categorical' the generator pairs the images with one-hot labels for us
    
print(train_generator.class_indices)
# Output:
#Found 3670 images belonging to 5 classes.
#{'daisy': 0, 'dandelion': 1, 'roses': 2, 'sunflowers': 3, 'tulips': 4}
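
To see what the generator yields, pull one batch from it; the labels come out one-hot encoded because of class_mode='categorical' (the shapes below assume batch_size is 64):

x_batch, y_batch = next(train_generator)
print(x_batch.shape)  # (64, 192, 192, 3): a batch of rescaled images
print(y_batch.shape)  # (64, 5): one-hot labels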

Create the Model

Use the model created in the first approach:

model = my_model((192,192,3), 5)

Compile the Model

The loss function is categorical_crossentropy, because flow_from_directory with class_mode='categorical' yields one-hot labels (see the batch peek above):

from tensorflow.keras.optimizers import Adam
opt=Adam()
model.compile(loss='categorical_crossentropy', 
              optimizer=opt, 
              metrics=['accuracy'])

Train the Model

model.fit_generator(train_generator,
                    steps_per_epoch =train_dataNums//batch_size,
                    epochs=epochs)

Output:
As the results show, this approach is slower per step, and over the same number of epochs it also converges less quickly; it is worth thinking about why.

Epoch 1/10
57/57 [==============================] - 27s 470ms/step - loss: 1.7245 - accuracy: 0.3236
Epoch 2/10
57/57 [==============================] - 24s 428ms/step - loss: 1.3447 - accuracy: 0.4010
Epoch 3/10
57/57 [==============================] - 24s 421ms/step - loss: 1.2717 - accuracy: 0.4323
Epoch 4/10
57/57 [==============================] - 24s 425ms/step - loss: 1.2436 - accuracy: 0.4507
Epoch 5/10
57/57 [==============================] - 24s 418ms/step - loss: 1.1845 - accuracy: 0.4907
Epoch 6/10
57/57 [==============================] - 24s 422ms/step - loss: 1.0594 - accuracy: 0.5657
Epoch 7/10
57/57 [==============================] - 24s 427ms/step - loss: 0.8960 - accuracy: 0.6521
Epoch 8/10
57/57 [==============================] - 24s 417ms/step - loss: 0.6565 - accuracy: 0.7570
Epoch 9/10
57/57 [==============================] - 24s 419ms/step - loss: 0.4401 - accuracy: 0.8464
Epoch 10/10
57/57 [==============================] - 24s 418ms/step - loss: 0.2753 - accuracy: 0.9121
<tensorflow.python.keras.callbacks.History at 0x7fb77d69ae10>

3. Custom Training

The third approach: a custom training loop.
Sometimes, for more involved tasks, or when you want finer control and more room to customize the training process, you can write the training loop yourself.

Prepare the Data

See the first approach: Prepare the Data.

Create the Model

See the first approach: Create the Model.

Define the Loss Function and Optimizer

from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import SparseCategoricalCrossentropy

# the model's last layer applies a softmax activation, so from_logits=False here
my_loss = SparseCategoricalCrossentropy(from_logits=False)
my_opt  = Adam()

def loss(real, pred):
    return my_loss(real, pred)

Decorating the function with @tf.function compiles it into a graph, which speeds it up. The train_per_step function computes the loss for one step, derives the gradients from it, and updates the variables.
It uses the tf.GradientTape class: GradientTape records operations on the trainable variables so that gradients can be computed; see the documentation for details. (A minimal standalone GradientTape example follows the function below.)

@tf.function
def train_per_step(inputs, targets):
    with tf.GradientTape() as tape:
        predicts = model(inputs)
        # compute the loss
        loss_value = loss(real=targets, pred=predicts)
    # compute the gradients of the loss w.r.t. the trainable variables
    gradients = tape.gradient(loss_value, model.trainable_variables)
    # pair each gradient with its variable
    grads_and_vars = zip(gradients, model.trainable_variables)
    # apply the gradient update
    my_opt.apply_gradients(grads_and_vars)

    return loss_value
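
If GradientTape is new to you, here is a minimal standalone example, unrelated to the model above: the tape records y = x * x and then returns dy/dx = 2x.

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x
print(tape.gradient(y, x).numpy())  # 6.0, i.e. 2 * 3.0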

Train the Model

Shuffle the dataset and set the batch size. (Note: image_label_datasets here is the dataset right after the .map() call in the data-preparation step, before the shuffle/repeat/batch applied when training with fit in the first approach.)

epochs     = 10
batch_size = 64
# shuffle and batch; drop_remainder drops the final smaller batch so every batch has the same size
image_label_datasets = image_label_datasets.shuffle(buffer_size=len(all_image_paths))
image_label_datasets = image_label_datasets.batch(batch_size, drop_remainder=True)

Start training.
This uses the tf.keras.metrics.Mean and tf.keras.metrics.SparseCategoricalAccuracy classes; each has three methods (reset_states, result, update_state), see the documentation for details.

import time

train_loss_results = []      # record the loss of each epoch
train_accuracy_results = []  # record the accuracy of each epoch

for epoch in range(epochs):
    start = time.time()

    # Note: these two lines sit inside the epoch loop, so the metrics are
    # recreated (and therefore reset) every epoch; no reset_states() call is needed
    epoch_loss_avg = tf.keras.metrics.Mean()
    epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()

    for image, label in image_label_datasets:
        batch_loss = train_per_step(image, label)
        # update the running averages; until reset_states() is called,
        # previous values keep accumulating
        epoch_loss_avg(batch_loss)
        epoch_accuracy(label, model(image))

    # save the loss and accuracy values (plotted in the sketch after the output below)
    train_loss_results.append(epoch_loss_avg.result())
    train_accuracy_results.append(epoch_accuracy.result())
    # after each epoch, print the loss, accuracy and elapsed time
    print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch,
                                                                epoch_loss_avg.result(),
                                                                epoch_accuracy.result()))
    print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))

Output:

Epoch 000: Loss: 1.128, Accuracy: 54.441%
Time taken for 1 epoch 18.131535291671753 sec

Epoch 001: Loss: 1.002, Accuracy: 62.582%
Time taken for 1 epoch 18.448741674423218 sec

Epoch 002: Loss: 0.895, Accuracy: 66.859%
Time taken for 1 epoch 18.31860089302063 sec

Epoch 003: Loss: 0.761, Accuracy: 74.397%
Time taken for 1 epoch 17.966360569000244 sec

Epoch 004: Loss: 0.585, Accuracy: 81.168%
Time taken for 1 epoch 17.9322772026062 sec

Epoch 005: Loss: 0.410, Accuracy: 89.200%
Time taken for 1 epoch 18.117868900299072 sec

Epoch 006: Loss: 0.269, Accuracy: 94.545%
Time taken for 1 epoch 17.976419687271118 sec

Epoch 007: Loss: 0.139, Accuracy: 97.478%
Time taken for 1 epoch 17.916046380996704 sec

Epoch 008: Loss: 0.094, Accuracy: 98.629%
Time taken for 1 epoch 18.384119987487793 sec

Epoch 009: Loss: 0.154, Accuracy: 98.438%
Time taken for 1 epoch 17.962616682052612 sec
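
Since train_loss_results and train_accuracy_results were collected above, a quick matplotlib sketch can plot the training curves:

import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 1, sharex=True, figsize=(6, 6))
axes[0].plot([float(v) for v in train_loss_results])      # loss per epoch
axes[0].set_ylabel('Loss')
axes[1].plot([float(v) for v in train_accuracy_results])  # accuracy per epoch
axes[1].set_ylabel('Accuracy')
axes[1].set_xlabel('Epoch')
plt.show()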

Next Section

TF2.0 Model Saving and Loading

Summary

To be updated...


Reposted from blog.csdn.net/qq_40661327/article/details/104225109