TensorFlow 2.0 Study Notes (14): A Simple Convolutional Neural Network

Convolutional Neural Networks

  • For background on CNNs, see the following two links
    • https://blog.csdn.net/weixin_42398658/article/details/84392845
    • https://blog.csdn.net/stdcoutzyx/article/details/41596663

Hands-on

  • Import packages
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import sklearn
import pandas as pd
import os
import sys
import time
import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
print(sys.version_info)
for module in mpl, np, pd, sklearn, tf, keras:
    print(module.__name__, module.__version__)
2.1.0
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
matplotlib 3.1.1
numpy 1.16.5
pandas 0.25.1
sklearn 0.21.3
tensorflow 2.1.0
tensorflow_core.python.keras.api._v2.keras 2.2.4-tf
  • Load and preprocess the data
# Load Fashion-MNIST, a drop-in but harder replacement for MNIST, from keras
fashion_mnist = keras.datasets.fashion_mnist
# Load the data, already split into training and test sets
(x_train_all, y_train_all), (x_test, y_test) = fashion_mnist.load_data()
# Take the first 5,000 training images as the validation set and the rest as the training set
# [:5000] starts from the beginning by default and takes the first 5000 elements
# [5000:] starts at index 5000 (inclusive) and runs to the end by default
x_valid, x_train = x_train_all[:5000], x_train_all[5000:]
y_valid, y_train = y_train_all[:5000], y_train_all[5000:]
# Print the shapes of these datasets
print(x_valid.shape, y_valid.shape)
print(x_train.shape, y_train.shape)
print(x_test.shape, y_test.shape)
(5000, 28, 28) (5000,)
(55000, 28, 28) (55000,)
(10000, 28, 28) (10000,)
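The split above is plain array slicing; a minimal sketch of the same idea on a toy array (the 10-element array is just an illustration, not the real data):

```python
import numpy as np

# Toy stand-in for x_train_all: 10 "images", each just a number
data = np.arange(10)

# The first 3 elements become the "validation" part, the rest the "training" part
valid, train = data[:3], data[3:]

print(valid)   # [0 1 2]
print(train)   # [3 4 5 6 7 8 9]
# The two parts together cover the whole array with no overlap
print(valid.shape[0] + train.shape[0] == data.shape[0])  # True
```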
# Standardization: x = (x - u) / std, i.e. subtract the mean and divide by the standard deviation

from sklearn.preprocessing import StandardScaler
# Create a StandardScaler object
scaler = StandardScaler()
# fit_transform expects a 2-D array, so the data must be reshaped first
# Convert to float32 first, since the division needs floating point
# x_train has shape [None, 28, 28]; reshape it to a single column, scale it,
# then reshape to [None, 28, 28, 1] (the trailing 1 is the channel axis Conv2D expects)
# reshape(-1, 1) produces one column (-1 lets the number of rows be inferred)
# fit: computes statistics intrinsic to the training set (mean, standard deviation, etc.)
# transform: applies the standardization using the statistics learned by fit
x_train_scaled = scaler.fit_transform(
    x_train.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28, 1)
x_valid_scaled = scaler.transform(
    x_valid.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28, 1)
x_test_scaled = scaler.transform(
    x_test.astype(np.float32).reshape(-1, 1)).reshape(-1, 28, 28, 1)
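Because the data is flattened to a single column before scaling, StandardScaler learns one global mean and one global standard deviation over all pixels. A NumPy-only sketch of the equivalent computation, using random fake images in place of the real dataset:

```python
import numpy as np

# Fake image batch standing in for x_train (the values are an assumption, not real data)
rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(100, 28, 28)).astype(np.float32)

# One global mean/std over every pixel, matching fit on a column of shape (-1, 1)
mean, std = x.mean(), x.std()

# Equivalent of scaler.transform(...).reshape(-1, 28, 28, 1)
x_scaled = ((x - mean) / std).reshape(-1, 28, 28, 1)

print(x_scaled.shape)                     # (100, 28, 28, 1)
print(abs(x_scaled.mean()) < 1e-4)        # True: roughly zero mean
print(abs(x_scaled.std() - 1.0) < 1e-4)   # True: roughly unit standard deviation
```

Note that the validation and test sets are transformed with the *training* statistics (transform, not fit_transform), so no information from them leaks into the preprocessing.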
  • Build the CNN model
# tf.keras.models.Sequential() builds the network layer by layer
model = keras.models.Sequential()
# Add a convolutional layer
# filters: number of convolution kernels; kernel_size: kernel size
# padding='same': pad the input so the output keeps the same spatial size
# activation: activation function; input_shape: input image size, with 1 channel
model.add(keras.layers.Conv2D(filters=32, kernel_size=3,
                              padding='same',
                              activation="selu",
                              input_shape=(28, 28, 1)))
# Note: filters=3 here is almost certainly a typo for filters=32;
# it is kept as-is so the code matches the summary and training output below
model.add(keras.layers.Conv2D(filters=3, kernel_size=3,
                              padding='same',
                              activation="selu"))
# Add a pooling layer
# Pooling halves the width and height, so the area shrinks to 1/4 and information is lost
# To compensate, the number of filters is doubled in the convolutional layers that follow
model.add(keras.layers.MaxPool2D(pool_size=2))
model.add(keras.layers.Conv2D(filters=64, kernel_size=3,
                              padding='same',
                              activation="selu"))
model.add(keras.layers.Conv2D(filters=64, kernel_size=3,
                              padding='same',
                              activation="selu"))
model.add(keras.layers.MaxPool2D(pool_size=2))

model.add(keras.layers.Conv2D(filters=128, kernel_size=3,
                              padding='same',
                              activation="selu"))
model.add(keras.layers.Conv2D(filters=128, kernel_size=3,
                              padding='same',
                              activation="selu"))
model.add(keras.layers.MaxPool2D(pool_size=2))
# Flatten the feature maps into a vector
model.add(keras.layers.Flatten())
# Fully connected layers
model.add(keras.layers.Dense(128, activation="selu"))
model.add(keras.layers.Dense(10, activation="softmax"))
model.compile(loss="sparse_categorical_crossentropy",
              optimizer="sgd",
              metrics=["accuracy"])
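With padding='same' and stride 1, each Conv2D keeps the spatial size, while each MaxPool2D(pool_size=2) floors it to half. A quick sketch tracing a 28×28 input through the stack above:

```python
# Trace the spatial size of the feature maps through conv-conv-pool blocks
size = 28
for layer in ["conv", "conv", "pool",
              "conv", "conv", "pool",
              "conv", "conv", "pool"]:
    if layer == "pool":
        size //= 2  # pooling halves height and width (floor division)
    # 'same'-padded, stride-1 convolutions leave the size unchanged

print(size)               # 3  (28 -> 14 -> 7 -> 3)
print(size * size * 128)  # 1152 units after Flatten
```

This is why the Flatten layer in the summary below reports 1152 outputs.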
  • Inspect the model structure
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 28, 28, 32)        320       
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 28, 28, 3)         867       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 3)         0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 14, 14, 64)        1792      
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 14, 14, 64)        36928     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 7, 7, 64)          0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 7, 7, 128)         73856     
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 7, 7, 128)         147584    
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 3, 3, 128)         0         
_________________________________________________________________
flatten (Flatten)            (None, 1152)              0         
_________________________________________________________________
dense (Dense)                (None, 128)               147584    
_________________________________________________________________
dense_1 (Dense)              (None, 10)                1290      
=================================================================
Total params: 410,221
Trainable params: 410,221
Non-trainable params: 0
_________________________________________________________________
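The parameter counts in the summary can be checked by hand: a Conv2D layer has (k·k·c_in + 1)·c_out parameters (the +1 is the bias per filter) and a Dense layer (n_in + 1)·n_out. A quick sanity check against the table above:

```python
def conv_params(k, c_in, c_out):
    # k x k kernel over c_in input channels, plus one bias per filter
    return (k * k * c_in + 1) * c_out

def dense_params(n_in, n_out):
    # one weight per input per unit, plus one bias per unit
    return (n_in + 1) * n_out

counts = [
    conv_params(3, 1, 32),     # conv2d:   320
    conv_params(3, 32, 3),     # conv2d_1: 867
    conv_params(3, 3, 64),     # conv2d_2: 1792
    conv_params(3, 64, 64),    # conv2d_3: 36928
    conv_params(3, 64, 128),   # conv2d_4: 73856
    conv_params(3, 128, 128),  # conv2d_5: 147584
    dense_params(1152, 128),   # dense:    147584
    dense_params(128, 10),     # dense_1:  1290
]
print(counts)
print(sum(counts))  # 410221, matching "Total params" above
```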
  • Train the model
# Start training
# epochs=10: iterate over the training set 10 times
# validation_data: evaluate on the validation set at the end of each epoch
# If loss and accuracy stop changing later on, plain SGD may be stuck near a
# poor local minimum; in that case, switch the optimizer to adam

# callbacks: functions invoked automatically during training,
# e.g. to check whether the loss has reached a target value
# They are passed into the training loop, i.e. into fit()
# Here we use the TensorBoard, EarlyStopping, and ModelCheckpoint callbacks

# TensorBoard needs a directory and ModelCheckpoint needs a file path,
# so create the directory and file name first

logdir = os.path.join("cnn-selu-callbacks")
if not os.path.exists(logdir):
    os.mkdir(logdir)
# Create the model file inside that directory; os.path.join(a, b) gives "a/b"
output_model_file = os.path.join(logdir, "fashion_mnist_model.h5")


callbacks = [
    keras.callbacks.TensorBoard(log_dir=logdir),
    keras.callbacks.ModelCheckpoint(output_model_file,
                                   save_best_only=True),
    keras.callbacks.EarlyStopping(patience=5, min_delta=1e-3),
]
history = model.fit(x_train_scaled, y_train, epochs=10,
                    validation_data=(x_valid_scaled, y_valid),
                    callbacks=callbacks)
# To view TensorBoard:
# 1. In the same environment, cd to the directory that contains the log folder
# 2. Run: tensorboard --logdir="cnn-selu-callbacks"
# 3. Open a browser at localhost:(port number)
Train on 55000 samples, validate on 5000 samples
Epoch 1/10
55000/55000 [==============================] - 155s 3ms/sample - loss: 0.4501 - accuracy: 0.8370 - val_loss: 0.3343 - val_accuracy: 0.8812
Epoch 2/10
55000/55000 [==============================] - 168s 3ms/sample - loss: 0.3022 - accuracy: 0.8898 - val_loss: 0.2922 - val_accuracy: 0.8946
Epoch 3/10
55000/55000 [==============================] - 161s 3ms/sample - loss: 0.2558 - accuracy: 0.9066 - val_loss: 0.2948 - val_accuracy: 0.8926
Epoch 4/10
55000/55000 [==============================] - 156s 3ms/sample - loss: 0.2238 - accuracy: 0.9187 - val_loss: 0.2599 - val_accuracy: 0.9068
Epoch 5/10
55000/55000 [==============================] - 151s 3ms/sample - loss: 0.1973 - accuracy: 0.9283 - val_loss: 0.2458 - val_accuracy: 0.9112
Epoch 6/10
55000/55000 [==============================] - 158s 3ms/sample - loss: 0.1742 - accuracy: 0.9371 - val_loss: 0.2477 - val_accuracy: 0.9102
Epoch 7/10
55000/55000 [==============================] - 154s 3ms/sample - loss: 0.1529 - accuracy: 0.9448 - val_loss: 0.2383 - val_accuracy: 0.9142
Epoch 8/10
55000/55000 [==============================] - 147s 3ms/sample - loss: 0.1339 - accuracy: 0.9518 - val_loss: 0.2625 - val_accuracy: 0.9100
Epoch 9/10
55000/55000 [==============================] - 150s 3ms/sample - loss: 0.1157 - accuracy: 0.9580 - val_loss: 0.2491 - val_accuracy: 0.9164
Epoch 10/10
55000/55000 [==============================] - 152s 3ms/sample - loss: 0.0995 - accuracy: 0.9639 - val_loss: 0.2732 - val_accuracy: 0.9146
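EarlyStopping(patience=5, min_delta=1e-3) stops training once val_loss has failed to improve by at least min_delta for 5 consecutive epochs. A simplified sketch of that logic (keras.callbacks.EarlyStopping also handles other monitors, baselines, and weight restoration), applied to the val_loss values from the 10-epoch log above:

```python
def early_stop_epoch(val_losses, patience=5, min_delta=1e-3):
    """Return the 1-based epoch training would stop after, or None if it never stops."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best - min_delta:  # improved by at least min_delta
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None

# val_loss values from the run above
val_losses = [0.3343, 0.2922, 0.2948, 0.2599, 0.2458,
              0.2477, 0.2383, 0.2625, 0.2491, 0.2732]
print(early_stop_epoch(val_losses))  # None: never 5 bad epochs in a row, so all 10 run
```

This explains why the run above completed all 10 epochs even though val_loss drifted upward at the end.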
def plot_learning_curves(history):
    # history.history is a dict; convert it to a DataFrame and plot it
    pd.DataFrame(history.history).plot(figsize=(8, 5))
    plt.grid(True)
    # gca: get current axes; gcf: get current figure
    plt.gca().set_ylim(0, 3)
    plt.show()
plot_learning_curves(history)

# Why the loss barely changes early in training:
# 1. Many parameters, not yet sufficiently trained
# 2. Vanishing gradients

[Figure: learning curves of loss and accuracy on the training and validation sets]

model.evaluate(x_test_scaled, y_test, verbose=2)

10000/10000 - 8s - loss: 0.3132 - accuracy: 0.9030


[0.3131856933772564, 0.903]
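model.predict returns one softmax probability vector per image; np.argmax recovers the class index, which maps to a Fashion-MNIST class name. A sketch with a made-up probability row standing in for a real prediction:

```python
import numpy as np

# The 10 Fashion-MNIST classes, in label order
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]

# Stand-in for model.predict(x_test_scaled[:1]): one row of class probabilities
probs = np.array([[0.01, 0.02, 0.05, 0.02, 0.05, 0.01, 0.02, 0.80, 0.01, 0.01]])

pred = np.argmax(probs, axis=1)  # index of the most probable class per row
print(pred[0], class_names[pred[0]])  # 7 Sneaker
```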
Reposted from blog.csdn.net/Smile_mingm/article/details/104561519