tensorflow2_tf.keras: softmax multi-class classification, with a first look at network tuning and hyperparameter selection

The dataset used is the fashion-mnist dataset that ships with tensorflow2: 28x28 images of clothes, shoes, bags, and the like.

fashion-mnist dataset download: https://pan.baidu.com/s/1G6LLRK-YaemylDt5bP-yag  extraction code: n4pl

Load the dataset with tf.keras.datasets.fashion_mnist.load_data(). On the first call the program downloads the dataset from an overseas server, which is slow, so this post provides a netdisk link above; after downloading, copy the files into the .keras/datasets directory.
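
For reference, on a typical Linux/macOS setup the Keras cache directory is ~/.keras/datasets, and load_data() looks for the four Fashion-MNIST .gz files under a fashion-mnist subdirectory (the exact path can vary with platform and Keras configuration); a quick check:

import os
# Typical cache location searched by tf.keras.datasets.fashion_mnist.load_data();
# the exact path may differ depending on platform and Keras configuration.
cache_dir = os.path.expanduser('~/.keras/datasets/fashion-mnist')
print(os.listdir(cache_dir))  # expect the four Fashion-MNIST .gz files (train/t10k images and labels)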

With the dataset ready, let's begin our exploration. Bring on the code!

import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Load the dataset
(train_image, train_label), (test_image, test_label) = tf.keras.datasets.fashion_mnist.load_data()

Check the shape of train_image:

train_image.shape

Output: (60000, 28, 28)

Training set: 60,000 images of 28*28.

Check the shape of test_image:

test_image.shape

Output: (10000, 28, 28)

Test set: 10,000 images of 28*28.

Check train_label:

train_label

Output: array([9, 0, 0, ..., 3, 0, 5], dtype=uint8)

train_label.max(), train_label.min()

Output: (9, 0)

So the labels run from 0 to 9: 10 classes in total.

Let's display one of the images to see what it looks like.

plt.imshow(train_image[1])
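
For reference, the ten label indices correspond to the standard Fashion-MNIST class names; a small sketch (the class_names list below follows the dataset's documented ordering and is not defined anywhere in this post) that prints the name of the image shown above:

# Standard Fashion-MNIST class names, indexed by label 0-9.
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
print(class_names[train_label[1]])  # class name of the image displayed above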

OK, now let's get started in earnest.

Normalize the data: pixel values range from 0 to 255, so divide everything by 255 to scale it into the [0, 1] range.

train_image = train_image/255
test_image = test_image/255
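
A small optional variant of the two lines above: NumPy's true division promotes the uint8 images to float64, so you can cast to float32 explicitly if you want to keep memory use down (purely optional, not what this post does):

# Optional variant of the two lines above: cast to float32 explicitly, since dividing
# the uint8 arrays by 255 otherwise produces float64.
train_image = train_image.astype('float32') / 255.0
test_image = test_image.astype('float32') / 255.0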

Now to build the network. But what kind of network should we build?

Let's start with a simple network containing one hidden layer. How many neurons should that hidden layer have?

Since the input data carries a lot of information, the neuron count can't be too small, or much useful information would be thrown away. Hmm, but I don't like too many parameters either, so let's tentatively set it to 64.

model = tf.keras.Sequential()
# Add a Flatten layer to turn the (28, 28) input into a 784-dimensional vector
model.add(tf.keras.layers.Flatten(input_shape=(28,28)))
model.add(tf.keras.layers.Dense(64,activation='relu'))
model.add(tf.keras.layers.Dense(10,activation='softmax'))
model.summary()

There are 50,890 trainable parameters.
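
As a quick sanity check of that number, here is the calculation (not part of the original code):

# Parameter count of this single-hidden-layer model:
#   Flatten output: 28*28 = 784 values
#   Dense(64): 784*64 weights + 64 biases
#   Dense(10): 64*10 weights + 10 biases
params = (784 * 64 + 64) + (64 * 10 + 10)
print(params)  # 50890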

Next, set the optimizer and loss function. We'll use the common adam optimizer; Adam is generally considered fairly robust to hyperparameter choices and can be viewed as a bias-corrected combination of Momentum and RMSProp. The suggested learning rate is 0.001.
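
If you prefer to set the learning rate explicitly rather than rely on the 'adam' string (which uses Keras's default of 0.001), you can pass an optimizer object instead; a minimal sketch:

# Equivalent to optimizer='adam', but with the default learning rate written out explicitly.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
# model.compile(optimizer=optimizer, ...) then accepts this object in place of the string.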

There are two loss functions to pair with softmax:

categorical_crossentropy: used when the labels are one-hot encoded.

sparse_categorical_crossentropy: our dataset's labels are the digits 0 to 9, not one-hot encoded, so this is the one we choose here.

model.compile(optimizer = 'adam',
              loss = 'sparse_categorical_crossentropy',
              metrics=['acc']
)
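
For comparison, if the labels had been one-hot encoded, the non-sparse loss would be the one to use; a sketch for illustration only (not used in the training below):

# Illustration only: one-hot encode the labels, which would pair with categorical_crossentropy.
train_label_onehot = tf.keras.utils.to_categorical(train_label, num_classes=10)
# model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
# model.fit(train_image, train_label_onehot, epochs=20)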

Now we can finally train.

history = model.fit(train_image, train_label, epochs=20)

Train on 60000 samples
Epoch 1/20  60000/60000 [==============================] - 3s 46us/sample - loss: 0.5240 - acc: 0.8194
Epoch 2/20  60000/60000 [==============================] - 2s 39us/sample - loss: 0.3957 - acc: 0.8596
Epoch 3/20  60000/60000 [==============================] - 2s 40us/sample - loss: 0.3555 - acc: 0.8726
Epoch 4/20  60000/60000 [==============================] - 2s 41us/sample - loss: 0.3311 - acc: 0.8793
Epoch 5/20  60000/60000 [==============================] - 2s 41us/sample - loss: 0.3132 - acc: 0.8866
Epoch 6/20  60000/60000 [==============================] - 2s 41us/sample - loss: 0.3008 - acc: 0.8898
Epoch 7/20  60000/60000 [==============================] - 2s 41us/sample - loss: 0.2895 - acc: 0.8942
Epoch 8/20  60000/60000 [==============================] - 3s 43us/sample - loss: 0.2779 - acc: 0.8981
Epoch 9/20  60000/60000 [==============================] - 3s 45us/sample - loss: 0.2699 - acc: 0.9000
Epoch 10/20  60000/60000 [==============================] - 3s 44us/sample - loss: 0.2616 - acc: 0.9028
Epoch 11/20  60000/60000 [==============================] - 3s 43us/sample - loss: 0.2545 - acc: 0.9062
Epoch 12/20  60000/60000 [==============================] - 3s 43us/sample - loss: 0.2481 - acc: 0.9086
Epoch 13/20  60000/60000 [==============================] - 3s 43us/sample - loss: 0.2424 - acc: 0.9105
Epoch 14/20  60000/60000 [==============================] - 3s 44us/sample - loss: 0.2366 - acc: 0.9122
Epoch 15/20  60000/60000 [==============================] - 3s 44us/sample - loss: 0.2312 - acc: 0.9143
Epoch 16/20  60000/60000 [==============================] - 3s 45us/sample - loss: 0.2263 - acc: 0.9157
Epoch 17/20  60000/60000 [==============================] - 3s 43us/sample - loss: 0.2206 - acc: 0.9181
Epoch 18/20  60000/60000 [==============================] - 3s 42us/sample - loss: 0.2179 - acc: 0.9200
Epoch 19/20  60000/60000 [==============================] - 3s 43us/sample - loss: 0.2113 - acc: 0.9204
Epoch 20/20  60000/60000 [==============================] - 3s 43us/sample - loss: 0.2082 - acc: 0.9221

After 20 epochs of training, acc reaches 0.9221, which looks decent. But how many epochs is appropriate? Let's plot the relationship between epochs and acc.

plt.plot(history.epoch, history.history.get('acc'))
plt.xlabel('epochs')
plt.ylabel('acc')

The figure shows that from about the 2nd epoch onward the curve visibly flattens; additional training epochs contribute little to the gain in accuracy.
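
Rather than eyeballing the curve, one common option is to let Keras stop training automatically once the metric stops improving, using the EarlyStopping callback; a sketch for reference only, not actually run in this post (the patience and validation_split values are arbitrary illustrative choices):

# Sketch: hold out 10% of the training data and stop once val_acc has not improved
# for 3 epochs in a row; patience and validation_split are arbitrary illustrative values.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_acc', patience=3,
                                              restore_best_weights=True)
history = model.fit(train_image, train_label, epochs=50,
                    validation_split=0.1, callbacks=[early_stop])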

Now let's evaluate the trained model on the test set.

model.evaluate(test_image,test_label)

10000/10000 [==============================] - 0s 31us/sample - loss: 0.3537 - acc: 0.8840

The accuracy (acc) is 0.8840, which seems acceptable. So, is there a way to raise acc further?

We can explore two directions: the network's depth and its width.

Network depth: intuitively, the more layers, the stronger the network's fitting capacity.

Network width: likewise, intuitively, the more neurons per layer, the stronger the fitting capacity.

Is that really so? Let's test both directions, starting from the model above.
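
Since the experiments below only vary the number of hidden layers and the neurons per layer, a small helper (hypothetical, not from the original post; the name build_model is my own) could make the variations easier to express:

# Hypothetical helper (not in the original post): build and compile a model with a
# configurable number of hidden layers and neurons per hidden layer.
def build_model(hidden_layers=1, units=64):
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))
    for _ in range(hidden_layers):
        model.add(tf.keras.layers.Dense(units, activation='relu'))
    model.add(tf.keras.layers.Dense(10, activation='softmax'))
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['acc'])
    return model

# For example, build_model(hidden_layers=1, units=128) gives the wider model tried next.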

First, let's widen the hidden layer from 64 neurons to 128. With more neurons, should the number of epochs also increase? Let's not increase it for now; we'll train, plot the epoch-vs-acc curve, and then reconsider.

model = tf.keras.Sequential()
# Add a Flatten layer to turn the (28, 28) input into a 784-dimensional vector
model.add(tf.keras.layers.Flatten(input_shape=(28,28)))
model.add(tf.keras.layers.Dense(128,activation='relu'))
model.add(tf.keras.layers.Dense(10,activation='softmax'))
model.compile(optimizer = 'adam',
              loss = 'sparse_categorical_crossentropy',
              metrics=['acc']
)
history = model.fit(train_image, train_label, epochs=20)

Train on 60000 samples
Epoch 1/20  60000/60000 [==============================] - 3s 46us/sample - loss: 0.4987 - acc: 0.8253
Epoch 2/20  60000/60000 [==============================] - 2s 39us/sample - loss: 0.3771 - acc: 0.8634
Epoch 3/20  60000/60000 [==============================] - 2s 41us/sample - loss: 0.3381 - acc: 0.8767
Epoch 4/20  60000/60000 [==============================] - 2s 41us/sample - loss: 0.3147 - acc: 0.8847
Epoch 5/20  60000/60000 [==============================] - 2s 41us/sample - loss: 0.2964 - acc: 0.8910
Epoch 6/20  60000/60000 [==============================] - 3s 42us/sample - loss: 0.2822 - acc: 0.8956
Epoch 7/20  60000/60000 [==============================] - 3s 43us/sample - loss: 0.2678 - acc: 0.9017
Epoch 8/20  60000/60000 [==============================] - 3s 42us/sample - loss: 0.2578 - acc: 0.9039
Epoch 9/20  60000/60000 [==============================] - 3s 42us/sample - loss: 0.2483 - acc: 0.9060
Epoch 10/20  60000/60000 [==============================] - 3s 42us/sample - loss: 0.2391 - acc: 0.9110
Epoch 11/20  60000/60000 [==============================] - 3s 42us/sample - loss: 0.2301 - acc: 0.9136
Epoch 12/20  60000/60000 [==============================] - 3s 42us/sample - loss: 0.2227 - acc: 0.9163
Epoch 13/20  60000/60000 [==============================] - 3s 43us/sample - loss: 0.2159 - acc: 0.9186
Epoch 14/20  60000/60000 [==============================] - 3s 43us/sample - loss: 0.2092 - acc: 0.9202
Epoch 15/20  60000/60000 [==============================] - 3s 42us/sample - loss: 0.2022 - acc: 0.9235
Epoch 16/20  60000/60000 [==============================] - 3s 42us/sample - loss: 0.1975 - acc: 0.9247
Epoch 17/20  60000/60000 [==============================] - 3s 42us/sample - loss: 0.1919 - acc: 0.9276
Epoch 18/20  60000/60000 [==============================] - 3s 42us/sample - loss: 0.1876 - acc: 0.9289
Epoch 19/20  60000/60000 [==============================] - 3s 43us/sample - loss: 0.1832 - acc: 0.9317
Epoch 20/20  60000/60000 [==============================] - 3s 44us/sample - loss: 0.1756 - acc: 0.9342

Training accuracy improved a little. Let's check performance on the test set.

model.evaluate(test_image,test_label)

10000/10000 [==============================] - 0s 36us/sample - loss: 0.3478 - acc: 0.8912

That improved too. Now look at the epoch-vs-acc curve.

The tail of the curve is steeper than the first model's, so more training epochs should still yield better results, but we won't test that further here. Next, let's add a hidden layer on top of the first model, just one more, keeping epochs at 20.

model = tf.keras.Sequential()
# Add a Flatten layer to turn the (28, 28) input into a 784-dimensional vector
model.add(tf.keras.layers.Flatten(input_shape=(28,28)))
model.add(tf.keras.layers.Dense(64,activation='relu'))
model.add(tf.keras.layers.Dense(64,activation='relu'))
model.add(tf.keras.layers.Dense(10,activation='softmax'))
model.compile(optimizer = 'adam',
              loss = 'sparse_categorical_crossentropy',
              metrics=['acc']
)
history = model.fit(train_image, train_label, epochs=20)

Train on 60000 samples
Epoch 1/20  60000/60000 [==============================] - 3s 48us/sample - loss: 0.5041 - acc: 0.8210
Epoch 2/20  60000/60000 [==============================] - 3s 45us/sample - loss: 0.3776 - acc: 0.8634
Epoch 3/20  60000/60000 [==============================] - 3s 45us/sample - loss: 0.3390 - acc: 0.8764
Epoch 4/20  60000/60000 [==============================] - 3s 43us/sample - loss: 0.3177 - acc: 0.8822
Epoch 5/20  60000/60000 [==============================] - 2s 42us/sample - loss: 0.3005 - acc: 0.8884
Epoch 6/20  60000/60000 [==============================] - 2s 40us/sample - loss: 0.2869 - acc: 0.8922
Epoch 7/20  60000/60000 [==============================] - 2s 41us/sample - loss: 0.2756 - acc: 0.8964
Epoch 8/20  60000/60000 [==============================] - 2s 40us/sample - loss: 0.2667 - acc: 0.9000
Epoch 9/20  60000/60000 [==============================] - 2s 39us/sample - loss: 0.2559 - acc: 0.9029
Epoch 10/20  60000/60000 [==============================] - 2s 41us/sample - loss: 0.2503 - acc: 0.9052
Epoch 11/20  60000/60000 [==============================] - 2s 41us/sample - loss: 0.2410 - acc: 0.9083
Epoch 12/20  60000/60000 [==============================] - 2s 41us/sample - loss: 0.2338 - acc: 0.9111
Epoch 13/20  60000/60000 [==============================] - 3s 42us/sample - loss: 0.2276 - acc: 0.9136
Epoch 14/20  60000/60000 [==============================] - 3s 42us/sample - loss: 0.2232 - acc: 0.9151
Epoch 15/20  60000/60000 [==============================] - 3s 43us/sample - loss: 0.2174 - acc: 0.9169
Epoch 16/20  60000/60000 [==============================] - 3s 42us/sample - loss: 0.2128 - acc: 0.9191
Epoch 17/20  60000/60000 [==============================] - 3s 42us/sample - loss: 0.2073 - acc: 0.9214
Epoch 18/20  60000/60000 [==============================] - 3s 43us/sample - loss: 0.2040 - acc: 0.9220
Epoch 19/20  60000/60000 [==============================] - 2s 39us/sample - loss: 0.1994 - acc: 0.9244
Epoch 20/20  60000/60000 [==============================] - 2s 40us/sample - loss: 0.1963 - acc: 0.9256

About the same as before adding the extra hidden layer. Let's see how it does on the test set.

model.evaluate(test_image,test_label)

10000/10000 [==============================] - 0s 36us/sample - loss: 0.3573 - acc: 0.8872

Also about the same. Look at the epoch-vs-acc curve again.

No obvious effect. Let's increase both the number of hidden layers and the neurons per layer.

model = tf.keras.Sequential()
# Add a Flatten layer to turn the (28, 28) input into a 784-dimensional vector
model.add(tf.keras.layers.Flatten(input_shape=(28,28)))
model.add(tf.keras.layers.Dense(128,activation='relu'))
model.add(tf.keras.layers.Dense(128,activation='relu'))
model.add(tf.keras.layers.Dense(128,activation='relu'))
model.add(tf.keras.layers.Dense(128,activation='relu'))
model.add(tf.keras.layers.Dense(128,activation='relu'))
model.add(tf.keras.layers.Dense(128,activation='relu'))
model.add(tf.keras.layers.Dense(10,activation='softmax'))
model.compile(optimizer = 'adam',
              loss = 'sparse_categorical_crossentropy',
              metrics=['acc']
)
history = model.fit(train_image, train_label, epochs=20)

Train on 60000 samples
Epoch 1/20  60000/60000 [==============================] - 4s 67us/sample - loss: 0.5205 - acc: 0.8120
Epoch 2/20  60000/60000 [==============================] - 3s 57us/sample - loss: 0.3879 - acc: 0.8591
Epoch 3/20  60000/60000 [==============================] - 4s 58us/sample - loss: 0.3504 - acc: 0.8723
Epoch 4/20  60000/60000 [==============================] - 3s 57us/sample - loss: 0.3289 - acc: 0.8813
Epoch 5/20  60000/60000 [==============================] - 3s 55us/sample - loss: 0.3080 - acc: 0.8867
Epoch 6/20  60000/60000 [==============================] - 3s 58us/sample - loss: 0.2968 - acc: 0.8923
Epoch 7/20  60000/60000 [==============================] - 3s 58us/sample - loss: 0.2862 - acc: 0.8937
Epoch 8/20  60000/60000 [==============================] - 3s 55us/sample - loss: 0.2738 - acc: 0.8988
Epoch 9/20  60000/60000 [==============================] - 3s 55us/sample - loss: 0.2655 - acc: 0.9028
Epoch 10/20  60000/60000 [==============================] - 3s 56us/sample - loss: 0.2580 - acc: 0.9048
Epoch 11/20  60000/60000 [==============================] - 3s 58us/sample - loss: 0.2507 - acc: 0.9080
Epoch 12/20  60000/60000 [==============================] - 4s 63us/sample - loss: 0.2442 - acc: 0.9088
Epoch 13/20  60000/60000 [==============================] - 4s 59us/sample - loss: 0.2357 - acc: 0.9127
Epoch 14/20  60000/60000 [==============================] - 3s 57us/sample - loss: 0.2318 - acc: 0.9147
Epoch 15/20  60000/60000 [==============================] - 3s 58us/sample - loss: 0.2245 - acc: 0.9154
Epoch 16/20  60000/60000 [==============================] - 4s 59us/sample - loss: 0.2198 - acc: 0.9183
Epoch 17/20  60000/60000 [==============================] - 3s 58us/sample - loss: 0.2151 - acc: 0.9193
Epoch 18/20  60000/60000 [==============================] - 3s 58us/sample - loss: 0.2108 - acc: 0.9202
Epoch 19/20  60000/60000 [==============================] - 4s 59us/sample - loss: 0.2097 - acc: 0.9222
Epoch 20/20  60000/60000 [==============================] - 3s 58us/sample - loss: 0.2016 - acc: 0.9250

Training acc did not improve. Check the test set again.

model.evaluate(test_image,test_label)

10000/10000 [==============================] - 0s 40us/sample - loss: 0.3670 - acc: 0.8801

No obvious improvement here either. That's it for today; we'll dig deeper next time.


Reposted from blog.csdn.net/ABCDABCD321123/article/details/104734947