TensorFlow 2 (tf.keras): softmax multi-class classification revisited with convolutions, plus network tuning and hyperparameter selection

This article uses the same dataset as https://blog.csdn.net/ABCDABCD321123/article/details/104734947.

import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# load the Fashion-MNIST dataset
(train_image, train_lable), (test_image, test_label) = tf.keras.datasets.fashion_mnist.load_data()
train_image.shape

(60000, 28, 28)

Add a channel dimension

train_image = tf.expand_dims(train_image,-1)
test_image = tf.expand_dims(test_image,-1)
train_image.shape

TensorShape([60000, 28, 28, 1])
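tf.expand_dims(x, -1) appends a length-1 axis at the end, turning each 28x28 grayscale image into a 28x28x1 tensor that Conv2D can consume. For comparison, the NumPy equivalent behaves the same way:

```python
import numpy as np

# same shape as train_image before the channel axis is added
x = np.zeros((60000, 28, 28), dtype=np.uint8)
x = np.expand_dims(x, -1)  # append a length-1 channel axis
print(x.shape)  # (60000, 28, 28, 1)
```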

Normalize

train_image = tf.cast(train_image,dtype=tf.float32)
test_image = tf.cast(test_image,dtype=tf.float32)
# scale pixel values to [0, 1]
train_image = train_image/255.
test_image = test_image/255.
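The cast-then-divide step maps the raw uint8 pixel range [0, 255] onto [0, 1]. A quick check of the same scaling in plain NumPy:

```python
import numpy as np

raw = np.array([0, 127, 255], dtype=np.uint8)  # raw pixel values
scaled = raw.astype(np.float32) / 255.0        # cast, then divide
print(scaled[0], scaled[2])  # 0.0 1.0
```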

Build the convolutional network

# build the model
model = tf.keras.Sequential()
model.add(tf.keras.layers.Conv2D(64,(3,3),input_shape=(28,28,1),activation='relu'))
model.add(tf.keras.layers.Conv2D(64,(3,3),activation = 'relu'))
model.add(tf.keras.layers.MaxPooling2D())
model.add(tf.keras.layers.Conv2D(128,(3,3),activation='relu'))
model.add(tf.keras.layers.Conv2D(128,(3,3),activation= 'relu'))
model.add(tf.keras.layers.MaxPooling2D())
model.add(tf.keras.layers.Conv2D(256,(3,3),activation='relu'))
model.add(tf.keras.layers.GlobalAveragePooling2D())
model.add(tf.keras.layers.Dense(256,activation='relu'))
model.add(tf.keras.layers.Dense(10,activation='softmax'))

model.summary()
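As a sanity check on the counts that model.summary() prints: a Conv2D layer has (kernel_h * kernel_w * in_channels + 1) * filters parameters, where the +1 is the per-filter bias. Computing this for the first two layers above:

```python
def conv2d_params(kh, kw, in_ch, filters):
    # kh*kw*in_ch weights per filter, plus one bias per filter
    return (kh * kw * in_ch + 1) * filters

print(conv2d_params(3, 3, 1, 64))   # 640   (first Conv2D, 1 input channel)
print(conv2d_params(3, 3, 64, 64))  # 36928 (second Conv2D, 64 input channels)
```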

Compile and train the model

model.compile(optimizer = 'adam',
              loss = 'sparse_categorical_crossentropy',
              metrics=['acc']
)
history = model.fit(train_image,
                    train_lable,
                    epochs=5,
                    batch_size = 64,
                    validation_data = (test_image,test_label)
                   )
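'sparse_categorical_crossentropy' is the right loss here because the Fashion-MNIST labels are plain integers 0-9 rather than one-hot vectors. Its per-sample value is just the negative log-probability the softmax assigns to the true class, illustrated here on a made-up 3-class output:

```python
import math

# per-sample sparse categorical cross-entropy: -log(p[true_class])
probs = [0.1, 0.7, 0.2]  # hypothetical softmax output over 3 classes
true_class = 1           # integer label, no one-hot encoding needed
loss = -math.log(probs[true_class])
print(round(loss, 4))  # 0.3567
```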

My computer isn't very powerful, so I deliberately reduced the number of epochs; training still took quite a while. The results:

Train on 60000 samples, validate on 10000 samples

Epoch 1/5 60000/60000 [==============================] - 168s 3ms/sample - loss: 0.5402 - acc: 0.7964 - val_loss: 0.3713 - val_acc: 0.8655

Epoch 2/5 60000/60000 [==============================] - 182s 3ms/sample - loss: 0.3112 - acc: 0.8848 - val_loss: 0.3110 - val_acc: 0.8812

Epoch 3/5 60000/60000 [==============================] - 184s 3ms/sample - loss: 0.2552 - acc: 0.9056 - val_loss: 0.2893 - val_acc: 0.8968

Epoch 4/5 60000/60000 [==============================] - 174s 3ms/sample - loss: 0.2241 - acc: 0.9166 - val_loss: 0.2724 - val_acc: 0.9054

Epoch 5/5 60000/60000 [==============================] - 169s 3ms/sample - loss: 0.1971 - acc: 0.9277 - val_loss: 0.2243 - val_acc: 0.9184

After 5 epochs, accuracy on both the training set and the test set has reached roughly 0.92.

The trend over these five epochs suggests that accuracy on both the training and test sets is still rising. Is that really the case? Slow machine notwithstanding, I changed the epoch count to 15 and waited... and finally got:

Train on 60000 samples, validate on 10000 samples

Epoch 1/15 60000/60000 [==============================] - 169s 3ms/sample - loss: 0.5399 - acc: 0.7977 - val_loss: 0.3638 - val_acc: 0.8721

Epoch 2/15 60000/60000 [==============================] - 174s 3ms/sample - loss: 0.3094 - acc: 0.8862 - val_loss: 0.2941 - val_acc: 0.8920

Epoch 3/15 60000/60000 [==============================] - 214s 4ms/sample - loss: 0.2573 - acc: 0.9049 - val_loss: 0.2687 - val_acc: 0.9013

Epoch 4/15 60000/60000 [==============================] - 213s 4ms/sample - loss: 0.2237 - acc: 0.9185 - val_loss: 0.2488 - val_acc: 0.9107

Epoch 5/15 60000/60000 [==============================] - 214s 4ms/sample - loss: 0.1985 - acc: 0.9259 - val_loss: 0.2451 - val_acc: 0.9115

Epoch 6/15 60000/60000 [==============================] - 213s 4ms/sample - loss: 0.1726 - acc: 0.9368 - val_loss: 0.2222 - val_acc: 0.9204

Epoch 7/15 60000/60000 [==============================] - 203s 3ms/sample - loss: 0.1520 - acc: 0.9444 - val_loss: 0.2500 - val_acc: 0.9153

Epoch 8/15 60000/60000 [==============================] - 165s 3ms/sample - loss: 0.1340 - acc: 0.9501 - val_loss: 0.2352 - val_acc: 0.9190

Epoch 9/15 60000/60000 [==============================] - 175s 3ms/sample - loss: 0.1174 - acc: 0.9570 - val_loss: 0.2540 - val_acc: 0.9154

Epoch 10/15 60000/60000 [==============================] - 189s 3ms/sample - loss: 0.0995 - acc: 0.9632 - val_loss: 0.2566 - val_acc: 0.9252

Epoch 11/15 60000/60000 [==============================] - 223s 4ms/sample - loss: 0.0853 - acc: 0.9679 - val_loss: 0.2889 - val_acc: 0.9204

Epoch 12/15 60000/60000 [==============================] - 212s 4ms/sample - loss: 0.0737 - acc: 0.9724 - val_loss: 0.3092 - val_acc: 0.9209

Epoch 13/15 60000/60000 [==============================] - 183s 3ms/sample - loss: 0.0653 - acc: 0.9761 - val_loss: 0.3298 - val_acc: 0.9182

Epoch 14/15 60000/60000 [==============================] - 198s 3ms/sample - loss: 0.0565 - acc: 0.9787 - val_loss: 0.3489 - val_acc: 0.9209

Epoch 15/15 60000/60000 [==============================] - 199s 3ms/sample - loss: 0.0533 - acc: 0.9804 - val_loss: 0.3458 - val_acc: 0.9237
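The 15-epoch log can be summarized without a plot: a small (hypothetical) helper that finds the epoch with the best validation accuracy, fed with the val_acc values copied from the log above:

```python
def best_epoch(val_acc):
    """Return the 1-based epoch with the highest validation accuracy."""
    return max(range(len(val_acc)), key=lambda i: val_acc[i]) + 1

# val_acc values copied from the 15-epoch run above
val_acc = [0.8721, 0.8920, 0.9013, 0.9107, 0.9115, 0.9204, 0.9153,
           0.9190, 0.9154, 0.9252, 0.9204, 0.9209, 0.9182, 0.9209, 0.9237]
print(best_epoch(val_acc))  # 10
```

Validation accuracy peaks at epoch 10 and then drifts, while training accuracy keeps climbing; that gap is the overfitting discussed next.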

So: training accuracy improved a great deal, but test-set performance lagged behind. The model is overfitting. The usual remedies are adding Dropout layers and regularization penalties. Since my machine can't handle it, please try these yourself.
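A minimal sketch of those two remedies, mirroring the layer sizes of the model built earlier; the Dropout rates (0.25, 0.5) and the L2 strength (1e-4) are common starting points, not tuned values:

```python
import tensorflow as tf

reg = tf.keras.regularizers.l2(1e-4)  # L2 weight penalty
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, (3, 3), input_shape=(28, 28, 1), activation='relu'),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Dropout(0.25),  # randomly drop 25% of activations
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Conv2D(256, (3, 3), activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation='relu', kernel_regularizer=reg),
    tf.keras.layers.Dropout(0.5),   # heavier dropout before the classifier
    tf.keras.layers.Dense(10, activation='softmax'),
])
```

Dropout is only active during training, so it slows the rise of training accuracy while (ideally) narrowing the gap to validation accuracy.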


Reposted from blog.csdn.net/ABCDABCD321123/article/details/104773163