[TF2.0-CNN] TensorFlow 2.0 Hello World (optimizing MNIST with a convolutional neural network)

Since TensorFlow 2.0 made Keras its default high-level API, it has become remarkably easy to use.

【Example 1】Handwritten digit recognition with MNIST

import tensorflow as tf

# Custom callback: stop training once training accuracy exceeds 99%
class myCallback(tf.keras.callbacks.Callback):
  def on_epoch_end(self, epoch, logs={}):
    if(logs.get('accuracy')>0.99):
      print("\nReached 99% accuracy so cancelling training!")
      self.model.stop_training = True

mnist = tf.keras.datasets.mnist

# Load the built-in MNIST dataset and scale pixel values to [0, 1]
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

callbacks = myCallback()

# A simple fully connected network: flatten the 28x28 images,
# one hidden layer, then a 10-way softmax output
model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(512, activation=tf.nn.relu),
  tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])

【Explanation】

TF ships with the MNIST dataset built in: reference it with mnist = tf.keras.datasets.mnist, then load it with mnist.load_data().
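For reference, a quick sketch of what load_data() returns (the shapes below are the standard MNIST train/test split):

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape, x_train.dtype)  # (60000, 28, 28) uint8, pixel values 0-255
print(y_train.shape, y_train.dtype)  # (60000,) uint8, integer labels 0-9
print(x_test.shape)                  # (10000, 28, 28)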

Creating the model is extremely simple:

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(512, activation=tf.nn.relu),
  tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
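To check the layer shapes and sizes, model.summary() prints them; the parameter counts follow directly from the layer definitions (a sketch for the model above):

model.summary()
# Flatten:    28*28 = 784 input values, no trainable parameters
# Dense(512): 784*512 weights + 512 biases = 401,920 parameters
# Dense(10):  512*10 weights + 10 biases   =   5,130 parameters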

Configure the model for training (optimizer, loss function, etc.):
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
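The loss is 'sparse_categorical_crossentropy' because y_train holds integer class labels (0-9) rather than one-hot vectors. If you one-hot encode the labels, you would switch to 'categorical_crossentropy'; a minimal sketch of the two equivalent setups:

# Integer labels, as returned by mnist.load_data() -> sparse variant
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# One-hot encoded labels -> plain categorical variant
y_train_onehot = tf.keras.utils.to_categorical(y_train, num_classes=10)
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])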

Start training:

model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])

Here epochs=10 means train for 10 epochs, and callbacks lists the callbacks to invoke during training (on_epoch_end fires after each epoch). Our logic here: if accuracy exceeds 99%, end training early.

  def on_epoch_end(self, epoch, logs={}):
    if(logs.get('accuracy')>0.99):
      print("\nReached 99% accuracy so cancelling training!")
      self.model.stop_training = True
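One thing the example skips: x_test and y_test are loaded but never used. To measure generalization rather than just training accuracy, you can evaluate on the held-out test set (a sketch, run after fit()):

# Returns [loss, accuracy] because 'accuracy' is the only compiled metric
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)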

【Example 2】Optimizing MNIST with a convolutional neural network

import tensorflow as tf


# Custom callback: stop training once training accuracy exceeds 99.8%
class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if (logs.get('accuracy') > 0.998):
            print("\nReached 99.8% accuracy so cancelling training!")
            self.model.stop_training = True


callbacks = myCallback()

mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
# Conv2D expects 4-D input (batch, height, width, channels), so add a
# channel dimension, then scale pixel values to [0, 1]
training_images, test_images = training_images.reshape(60000, 28, 28, 1) / 255.0, test_images.reshape(10000, 28, 28, 1) / 255.0

# Two conv+pool groups in front of the fully connected layers of Example 1
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# model fitting
history = model.fit(
    training_images,
    training_labels,
    epochs=20,
    callbacks=[callbacks]
)
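fit() returns a History object whose history dict records per-epoch metrics; a quick sketch of how to inspect it after training:

# Keys match the compiled loss and metric names
print(history.history['loss'])      # per-epoch training loss values
print(history.history['accuracy'])  # per-epoch training accuracy values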

【Explanation】

Differences from Example 1:

1. Added convolutional and pooling layers (each group is one conv layer plus one pooling layer; two groups were added; see the shape walkthrough after this list):

    tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),

2. Raised the early-stopping accuracy threshold from 99% to 99.8%.
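To see what the two conv+pool groups do to the feature maps, here is the layer-by-layer shape and parameter arithmetic (a sketch; model.summary() on the model above will print the same numbers):

model.summary()
# Conv2D(64, 3x3) on 28x28x1 -> 26x26x64   params: (3*3*1)*64 + 64 = 640
# MaxPooling2D(2, 2)         -> 13x13x64   no parameters
# Conv2D(64, 3x3)            -> 11x11x64   params: (3*3*64)*64 + 64 = 36,928
# MaxPooling2D(2, 2)         -> 5x5x64     no parameters
# Flatten                    -> 1600 values
# Dense(512)                 params: 1600*512 + 512 = 819,712
# Dense(10)                  params: 512*10 + 10 = 5,130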

【Output】

Epoch 1/20
60000/60000 [==============================] - 24s 398us/sample - loss: 0.1073 - acc: 0.9667
Epoch 2/20
60000/60000 [==============================] - 23s 382us/sample - loss: 0.0369 - acc: 0.9884
Epoch 3/20
60000/60000 [==============================] - 21s 355us/sample - loss: 0.0243 - acc: 0.9921
Epoch 4/20
60000/60000 [==============================] - 21s 358us/sample - loss: 0.0174 - acc: 0.9945
Epoch 5/20
60000/60000 [==============================] - 22s 360us/sample - loss: 0.0141 - acc: 0.9958
Epoch 6/20
60000/60000 [==============================] - 22s 358us/sample - loss: 0.0105 - acc: 0.9963
Epoch 7/20
60000/60000 [==============================] - 22s 361us/sample - loss: 0.0094 - acc: 0.9969
Epoch 8/20
60000/60000 [==============================] - 21s 355us/sample - loss: 0.0079 - acc: 0.9974
Epoch 9/20
60000/60000 [==============================] - 21s 357us/sample - loss: 0.0069 - acc: 0.9976
Epoch 10/20
59872/60000 [============================>.] - ETA: 0s - loss: 0.0056 - acc: 0.9981
Reached 99.8% accuracy so cancelling training!
60000/60000 [==============================] - 21s 358us/sample - loss: 0.0056 - acc: 0.9981

Key takeaways:

1. Each training epoch takes 21-24 seconds.

2. At epoch 10, accuracy reached 99.8%, so training stopped early.

With the convolutional network, the model easily pushes MNIST training accuracy to 99.8%, which is a solid result.

Note: if you accidentally run this code in a TF 1.x environment and hit a KeyError, try changing 'accuracy' in the code to 'acc' (TF 1.x used the shorter metric key in its logs).
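If you want one callback that works in both environments, a version-tolerant sketch that checks both key spellings:

class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        logs = logs or {}
        # TF 2.x logs the metric as 'accuracy'; TF 1.x used 'acc'
        acc = logs.get('accuracy', logs.get('acc'))
        if acc is not None and acc > 0.998:
            print("\nReached 99.8% accuracy so cancelling training!")
            self.model.stop_training = True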


Reposted from blog.csdn.net/menghaocheng/article/details/102742153