"Deep Learning with Python" Labs: Recurrent Neural Networks

Recurrent Neural Networks

Environment

keras 2.1.5
tensorflow 1.4.0

Tools

Jupyter Notebook

Experiment 1: Using Word Embeddings

Word Embeddings

Word embeddings are another popular and powerful way to associate vectors with words: dense vectors that pack more information into far fewer dimensions than sparse representations such as one-hot encodings.

Objective

Implement word embeddings.

Dataset

The IMDB dataset

Approaches

1. Learn word embeddings jointly with the main task you care about.
2. Load pretrained word embeddings into the model.

Procedure

1. Learning word embeddings with an Embedding layer
  The simplest way to associate a dense vector with a word would be to pick the vector at random, but then the embedding space would have no structure. What we really want is for the geometric relationships between word vectors to reflect the semantic relationships between the words, which is why the vectors are learned from data.
The Embedding layer:
  a dictionary that maps integer indices (each standing for a specific word) to dense vectors.

from keras.layers import Embedding
# Two arguments: the number of possible tokens and the dimensionality of the embeddings.
embedding_layer = Embedding(1000, 64) 

Applying it to the IMDB movie-review sentiment-prediction task:
  The layer returns a 3D float tensor of shape (samples, sequence_length, embedding_dim), which can then be processed by an RNN layer or a 1D convolution layer.
  The weights of the Embedding layer start out random; during training, the word vectors are gradually adjusted via backpropagation, structuring the space into something the downstream model can exploit. Once trained, your embedding space will show a lot of structure, a kind of structure specialized for the problem you are solving.
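
As a quick illustration (a sketch with made-up data, not part of the original lab), the Embedding layer turns a 2D batch of integer word indices into a 3D tensor of word vectors:

import numpy as np
from keras.models import Sequential
from keras.layers import Embedding

sketch = Sequential()
sketch.add(Embedding(1000, 64, input_length=10))

dummy_input = np.random.randint(0, 1000, size=(32, 10))  # 32 sequences of 10 word indices
print(sketch.predict(dummy_input).shape)  # (32, 10, 64): one 64-d vector per word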

from keras.datasets import imdb
from keras import preprocessing

max_features = 10000  # number of words to consider as features
maxlen = 20           # cut reviews after this many words
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = preprocessing.sequence.pad_sequences(x_test, maxlen=maxlen)
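
For reference, a tiny sketch (made-up input, not in the original, using the preprocessing module imported above): pad_sequences left-pads short sequences with zeros and truncates long ones, so every row ends up with exactly maxlen entries.

print(preprocessing.sequence.pad_sequences([[1, 2, 3]], maxlen=5))
# [[0 0 1 2 3]]  zeros are prepended by default (padding='pre')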

from keras.models import Sequential
from keras.layers import Flatten, Dense

model = Sequential()
model.add(Embedding(10000, 8, input_length=maxlen))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
model.summary()
history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=32,
                    validation_split=0.2)
'''
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_5 (Embedding)      (None, 20, 8)             80000     
_________________________________________________________________
flatten_2 (Flatten)          (None, 160)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 161       
=================================================================
Total params: 80,161
Trainable params: 80,161
Non-trainable params: 0
_________________________________________________________________
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 2s 76us/step - loss: 0.6759 - acc: 0.6044 - val_loss: 0.6398 - val_acc: 0.6810
Epoch 2/10
20000/20000 [==============================] - 1s 47us/step - loss: 0.5657 - acc: 0.7428 - val_loss: 0.5467 - val_acc: 0.7206
Epoch 3/10
20000/20000 [==============================] - 1s 44us/step - loss: 0.4752 - acc: 0.7808 - val_loss: 0.5113 - val_acc: 0.7384
Epoch 4/10
20000/20000 [==============================] - 1s 52us/step - loss: 0.4263 - acc: 0.8079 - val_loss: 0.5008 - val_acc: 0.7454
Epoch 5/10
20000/20000 [==============================] - 1s 60us/step - loss: 0.3930 - acc: 0.8257 - val_loss: 0.4981 - val_acc: 0.7540
Epoch 6/10
20000/20000 [==============================] - 1s 63us/step - loss: 0.3668 - acc: 0.8394 - val_loss: 0.5013 - val_acc: 0.7532
Epoch 7/10
20000/20000 [==============================] - 1s 62us/step - loss: 0.3435 - acc: 0.8534 - val_loss: 0.5051 - val_acc: 0.7518
Epoch 8/10
20000/20000 [==============================] - 1s 49us/step - loss: 0.3223 - acc: 0.8658 - val_loss: 0.5132 - val_acc: 0.7486
Epoch 9/10
20000/20000 [==============================] - 1s 49us/step - loss: 0.3022 - acc: 0.8765 - val_loss: 0.5213 - val_acc: 0.7494
Epoch 10/10
20000/20000 [==============================] - 1s 50us/step - loss: 0.2839 - acc: 0.8860 - val_loss: 0.5302 - val_acc: 0.7466
'''
# Considering we only look at the first 20 words of each review, a validation accuracy of about 75% is fairly good for this naive approach.
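
The learned vectors can be inspected directly; a short sketch (my addition, standard Keras API) pulls the trained embedding matrix out of the first layer:

# The Embedding layer stores a single weight: a (10000, 8) matrix holding
# one 8-dimensional vector per token index.
embedding_weights = model.layers[0].get_weights()[0]
print(embedding_weights.shape)  # (10000, 8)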

2. Using pretrained word embeddings
  Load embedding vectors from a precomputed embedding space that is already highly structured and exhibits useful properties, capturing generic aspects of language structure. This is most useful when little training data is available.

Putting it all together: from raw text to word embeddings

Embed the sentences into sequences of vectors, flatten them, and train a Dense layer on top.
1. Download the raw IMDB data as text:

http://ai.stanford.edu/~amaas/data/sentiment/

2. Preprocessing:
  Collect the individual training reviews into a list of strings, one string per review, and collect the review labels (positive/negative) into a labels list.
  Note: adjust the path and the character encoding for your own setup.

import os
from keras.datasets import imdb
imdb_dir = 'C:/Users/Administrator/Desktop/deeplearning/03.递归神经网络/aclImdb'
train_dir = os.path.join(imdb_dir, 'train')

labels = []
texts = []

for label_type in ['neg', 'pos']:
    dir_name = os.path.join(train_dir, label_type)
    for fname in os.listdir(dir_name):
        if fname[-4:] == '.txt':
            f = open(os.path.join(dir_name,fname),encoding='utf-8')
            texts.append(f.read())
            f.close()
            if label_type == 'neg':
                labels.append(0)
            else:
                labels.append(1)

3. Tokenizing the data:
  Vectorize the collected texts and split them into a training set and a validation set.
  Because pretrained word embeddings are meant to be especially useful on problems with little training data, we add the following twist: we restrict the training data to the first 200 samples.

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import numpy as np

maxlen = 100                # cut reviews after 100 words
training_samples = 200      # train on only 200 samples
validation_samples = 10000  # validate on 10,000 samples
max_words = 10000           # consider only the top 10,000 words in the dataset

tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)

word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))

data = pad_sequences(sequences, maxlen=maxlen)

labels = np.asarray(labels)
print('Shape of data tensor:', data.shape)
print('Shape of label tensor:', labels.shape)

indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]

x_train = data[:training_samples]
y_train = labels[:training_samples]
x_val = data[training_samples: training_samples + validation_samples]
y_val = labels[training_samples: training_samples + validation_samples]
'''
Found 88582 unique tokens.
Shape of data tensor: (25000, 100)
Shape of label tensor: (25000,)
'''
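
To eyeball what the tokenizer produced, an optional sketch (my addition) that maps one padded sequence back to words via word_index:

# Invert word_index (word -> integer rank); index 0 is padding, so skip it.
reverse_word_index = dict((i, w) for w, i in word_index.items())
print(' '.join(reverse_word_index.get(i, '?') for i in data[0] if i != 0))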

4. Download the GloVe word embeddings:

https://nlp.stanford.edu/projects/glove/

5. Preprocessing the embeddings:
  Parse the unzipped file to build an index mapping words to their vector representations.
  Note: adjust the path and the character encoding.

glove_dir = 'C:/Users/Administrator/Desktop/deeplearning/03.递归神经网络/glove.6B'

embeddings_index = {}
f = open(os.path.join(glove_dir, 'glove.6B.100d.txt'),encoding='utf-8')
for line in f:
    values = line.split()
    word = values[0]
    coefs = np.asarray(values[1:], dtype='float32')
    embeddings_index[word] = coefs
f.close()

print('Found %s word vectors.' % len(embeddings_index))
'''
Found 400000 word vectors.
'''
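
As an optional sanity check on the parsed index (my addition, assuming these common words are present in glove.6B.100d), related words should score a noticeably higher cosine similarity than unrelated ones:

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(embeddings_index['good'], embeddings_index['great']))   # relatively high
print(cosine_similarity(embeddings_index['good'], embeddings_index['tuesday'])) # noticeably lower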

Build an embedding matrix:

embedding_dim = 100

embedding_matrix = np.zeros((max_words, embedding_dim))
for word, i in word_index.items():
    embedding_vector = embeddings_index.get(word)
    if i < max_words:
        if embedding_vector is not None:
            embedding_matrix[i] = embedding_vector
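
Words missing from GloVe (and index 0, reserved for padding) keep their all-zero rows. A quick coverage check (my addition):

hits = sum(1 for word, i in word_index.items()
           if i < max_words and embeddings_index.get(word) is not None)
print('%d of the top %d words have a GloVe vector' % (hits, max_words))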

6. Define the model:

from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
'''
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_6 (Embedding)      (None, 100, 100)          1000000   
_________________________________________________________________
flatten_3 (Flatten)          (None, 10000)             0         
_________________________________________________________________
dense_3 (Dense)              (None, 32)                320032    
_________________________________________________________________
dense_4 (Dense)              (None, 1)                 33        
=================================================================
Total params: 1,320,065
Trainable params: 1,320,065
Non-trainable params: 0
_________________________________________________________________
'''

7. Loading the GloVe embeddings into the model:
  The Embedding layer has a single weight: a 2D float matrix.
  Load the prepared GloVe matrix into the Embedding layer and freeze it (trainable = False) so that training doesn't destroy the pretrained vectors.

model.layers[0].set_weights([embedding_matrix])
model.layers[0].trainable = False
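
An equivalent alternative (a sketch, not how this lab does it) is to hand the matrix to the Embedding constructor and freeze the layer at construction time:

# Same effect as set_weights(...) followed by trainable = False.
frozen_embedding = Embedding(max_words, embedding_dim, input_length=maxlen,
                             weights=[embedding_matrix], trainable=False)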

8. Training:

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=32,
                    validation_data=(x_val, y_val))
model.save_weights('pre_trained_glove_model.h5')
'''
Train on 200 samples, validate on 10000 samples
Epoch 1/10
200/200 [==============================] - 1s 6ms/step - loss: 1.6337 - acc: 0.5250 - val_loss: 0.7130 - val_acc: 0.5100
Epoch 2/10
200/200 [==============================] - 1s 5ms/step - loss: 0.7565 - acc: 0.5800 - val_loss: 0.6910 - val_acc: 0.5418
Epoch 3/10
200/200 [==============================] - 1s 5ms/step - loss: 0.5956 - acc: 0.6950 - val_loss: 1.1205 - val_acc: 0.4936
Epoch 4/10
200/200 [==============================] - 1s 7ms/step - loss: 0.5335 - acc: 0.7350 - val_loss: 0.7134 - val_acc: 0.5362
Epoch 5/10
200/200 [==============================] - 1s 5ms/step - loss: 0.4713 - acc: 0.8100 - val_loss: 0.7177 - val_acc: 0.5589
Epoch 6/10
200/200 [==============================] - 1s 5ms/step - loss: 0.1448 - acc: 0.9800 - val_loss: 1.3373 - val_acc: 0.4952
Epoch 7/10
200/200 [==============================] - 1s 5ms/step - loss: 0.2545 - acc: 0.8800 - val_loss: 1.3110 - val_acc: 0.4960
Epoch 8/10
200/200 [==============================] - 1s 7ms/step - loss: 0.1102 - acc: 0.9800 - val_loss: 0.8168 - val_acc: 0.5558
Epoch 9/10
200/200 [==============================] - 1s 5ms/step - loss: 0.0760 - acc: 0.9800 - val_loss: 1.5204 - val_acc: 0.5115
Epoch 10/10
200/200 [==============================] - 1s 5ms/step - loss: 0.0680 - acc: 0.9850 - val_loss: 0.7458 - val_acc: 0.5759
'''

Plotting:

import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()

[Figure: training and validation accuracy]
[Figure: training and validation loss]
9. Tuning:
  The model overfits almost immediately, and because there are so few training samples, validation accuracy has high variance.
  For comparison, train the same model without loading the pretrained word embeddings and without freezing the embedding layer.

from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense

model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=32,
                    validation_data=(x_val, y_val))
'''
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_7 (Embedding)      (None, 100, 100)          1000000   
_________________________________________________________________
flatten_4 (Flatten)          (None, 10000)             0         
_________________________________________________________________
dense_5 (Dense)              (None, 32)                320032    
_________________________________________________________________
dense_6 (Dense)              (None, 1)                 33        
=================================================================
Total params: 1,320,065
Trainable params: 1,320,065
Non-trainable params: 0
_________________________________________________________________
Train on 200 samples, validate on 10000 samples
Epoch 1/10
200/200 [==============================] - 2s 8ms/step - loss: 0.6951 - acc: 0.4350 - val_loss: 0.6950 - val_acc: 0.5167
Epoch 2/10
200/200 [==============================] - 1s 6ms/step - loss: 0.5028 - acc: 0.9800 - val_loss: 0.7054 - val_acc: 0.5069
Epoch 3/10
200/200 [==============================] - 1s 7ms/step - loss: 0.2897 - acc: 0.9850 - val_loss: 0.7012 - val_acc: 0.5189
Epoch 4/10
200/200 [==============================] - 1s 6ms/step - loss: 0.1182 - acc: 1.0000 - val_loss: 0.7165 - val_acc: 0.5156
Epoch 5/10
200/200 [==============================] - 1s 6ms/step - loss: 0.0523 - acc: 1.0000 - val_loss: 0.7150 - val_acc: 0.5288
Epoch 6/10
200/200 [==============================] - 1s 6ms/step - loss: 0.0261 - acc: 1.0000 - val_loss: 0.7253 - val_acc: 0.5262
Epoch 7/10
200/200 [==============================] - 1s 6ms/step - loss: 0.0141 - acc: 1.0000 - val_loss: 0.7211 - val_acc: 0.5384
Epoch 8/10
200/200 [==============================] - 1s 6ms/step - loss: 0.0082 - acc: 1.0000 - val_loss: 0.7393 - val_acc: 0.5268
Epoch 9/10
200/200 [==============================] - 1s 7ms/step - loss: 0.0049 - acc: 1.0000 - val_loss: 0.7283 - val_acc: 0.5393
Epoch 10/10
200/200 [==============================] - 1s 7ms/step - loss: 0.0030 - acc: 1.0000 - val_loss: 0.7474 - val_acc: 0.5316
'''

Plotting:

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()

[Figure: training and validation accuracy]
[Figure: training and validation loss]
10. Evaluating the model:
Tokenize the test data:

test_dir = os.path.join(imdb_dir, 'test')

labels = []
texts = []

for label_type in ['neg', 'pos']:
    dir_name = os.path.join(test_dir, label_type)
    for fname in sorted(os.listdir(dir_name)):
        if fname[-4:] == '.txt':
            f = open(os.path.join(dir_name, fname),encoding='utf-8')
            texts.append(f.read())
            f.close()
            if label_type == 'neg':
                labels.append(0)
            else:
                labels.append(1)

sequences = tokenizer.texts_to_sequences(texts)
x_test = pad_sequences(sequences, maxlen=maxlen)
y_test = np.asarray(labels)

Load and evaluate:
  Test accuracy comes out at roughly 57%: with only 200 training samples, this task is genuinely hard.

model.load_weights('pre_trained_glove_model.h5')
model.evaluate(x_test, y_test)
'''
25000/25000 [==============================] - 3s 120us/step
[0.74487344819068912, 0.57604]
'''

Experiment 2: A First Recurrent Layer in Keras

Objective

Learn about the built-in recurrent layers in Keras.
  The forward pass that one would otherwise implement by hand in Numpy corresponds to an actual Keras layer: the SimpleRNN layer (see the Numpy sketch below).
  There is one minor difference: SimpleRNN processes batches of sequences rather than a single sequence.
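
A minimal Numpy sketch of that forward pass for a single sequence (random weights and inputs, tanh activation):

import numpy as np

timesteps = 100       # number of timesteps in the input sequence
input_features = 32   # dimensionality of the input at each timestep
output_features = 64  # dimensionality of the output (and of the state)

inputs = np.random.random((timesteps, input_features))
state_t = np.zeros((output_features,))  # initial state: an all-zero vector

W = np.random.random((output_features, input_features))
U = np.random.random((output_features, output_features))
b = np.random.random((output_features,))

successive_outputs = []
for input_t in inputs:  # input_t has shape (input_features,)
    # Combine the current input with the previous state (the previous output).
    output_t = np.tanh(np.dot(W, input_t) + np.dot(U, state_t) + b)
    successive_outputs.append(output_t)
    state_t = output_t  # this output becomes the state for the next timestep

final_output_sequence = np.stack(successive_outputs, axis=0)  # (timesteps, output_features)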

Idea

Train two models: a generator G that produces examples from random-noise input, and an adversary A that learns to distinguish generated examples from real ones. Once A has been trained into an effective discriminator, stack G and A into a GAN, freeze the weights of the adversarial half, and train the generator's weights so that random-noise inputs are pushed toward the 'real' class output of the adversarial half.

Workflow:
[Figure: workflow diagram (not preserved)]

Procedure

1. The return_sequences constructor argument:

from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN

model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32))
model.summary()
'''
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_1 (Embedding)      (None, None, 32)          320000    
_________________________________________________________________
simple_rnn_1 (SimpleRNN)     (None, 32)                2080      
=================================================================
Total params: 322,080
Trainable params: 322,080
Non-trainable params: 0
_________________________________________________________________
'''
model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32, return_sequences=True))
model.summary()
'''
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_2 (Embedding)      (None, None, 32)          320000    
_________________________________________________________________
simple_rnn_2 (SimpleRNN)     (None, None, 32)          2080      
=================================================================
Total params: 322,080
Trainable params: 322,080
Non-trainable params: 0
_________________________________________________________________
'''

To increase the representational power of a network, it can be useful to stack several recurrent layers one after another; every intermediate layer must then return its full sequence of outputs (return_sequences=True), while the final layer returns only its last output.

model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32))
model.summary()
'''
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
embedding_3 (Embedding)      (None, None, 32)          320000    
_________________________________________________________________
simple_rnn_3 (SimpleRNN)     (None, None, 32)          2080      
_________________________________________________________________
simple_rnn_4 (SimpleRNN)     (None, None, 32)          2080      
_________________________________________________________________
simple_rnn_5 (SimpleRNN)     (None, None, 32)          2080      
_________________________________________________________________
simple_rnn_6 (SimpleRNN)     (None, 32)                2080      
=================================================================
Total params: 328,320
Trainable params: 328,320
Non-trainable params: 0
_________________________________________________________________
'''

2. Preprocess the data:

from keras.datasets import imdb
from keras.preprocessing import sequence

max_features = 10000  # number of words to consider as features
maxlen = 500          # cut texts after this many words
batch_size = 32

print('Loading data...')
(input_train, y_train), (input_test, y_test) = imdb.load_data(num_words=max_features)
print(len(input_train), 'train sequences')
print(len(input_test), 'test sequences')

print('Pad sequences (samples x time)')
input_train = sequence.pad_sequences(input_train, maxlen=maxlen)
input_test = sequence.pad_sequences(input_test, maxlen=maxlen)
print('input_train shape:', input_train.shape)
print('input_test shape:', input_test.shape)
'''
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
input_train shape: (25000, 500)
input_test shape: (25000, 500)
'''

3. Train a simple recurrent network consisting of an Embedding layer and a SimpleRNN layer:

from keras.layers import Dense

model = Sequential()
model.add(Embedding(max_features, 32))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])
history = model.fit(input_train, y_train,
                    epochs=10,
                    batch_size=128,
                    validation_split=0.2)
'''
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 24s 1ms/step - loss: 0.6418 - acc: 0.6144 - val_loss: 0.4725 - val_acc: 0.7934
Epoch 2/10
20000/20000 [==============================] - 17s 829us/step - loss: 0.4259 - acc: 0.8150 - val_loss: 0.4076 - val_acc: 0.8274
Epoch 3/10
20000/20000 [==============================] - 20s 990us/step - loss: 0.3028 - acc: 0.8801 - val_loss: 0.3612 - val_acc: 0.8458
Epoch 4/10
20000/20000 [==============================] - 19s 942us/step - loss: 0.2329 - acc: 0.9083 - val_loss: 0.3842 - val_acc: 0.8438
Epoch 5/10
20000/20000 [==============================] - 19s 937us/step - loss: 0.1717 - acc: 0.9366 - val_loss: 0.3924 - val_acc: 0.8556
Epoch 6/10
20000/20000 [==============================] - 20s 1ms/step - loss: 0.1288 - acc: 0.9547 - val_loss: 0.3990 - val_acc: 0.8502
Epoch 7/10
20000/20000 [==============================] - 19s 954us/step - loss: 0.0805 - acc: 0.9739 - val_loss: 0.4570 - val_acc: 0.8556
Epoch 8/10
20000/20000 [==============================] - 19s 968us/step - loss: 0.0520 - acc: 0.9838 - val_loss: 0.6217 - val_acc: 0.7936
Epoch 9/10
20000/20000 [==============================] - 20s 981us/step - loss: 0.0314 - acc: 0.9908 - val_loss: 0.6018 - val_acc: 0.8234
Epoch 10/10
20000/20000 [==============================] - 22s 1ms/step - loss: 0.0226 - acc: 0.9931 - val_loss: 0.6232 - val_acc: 0.8248
'''

4. Results:
  Performance is not great, mainly because SimpleRNN is not good at processing long sequences such as text.

import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()

[Figure: training and validation accuracy]
[Figure: training and validation loss]
5. A concrete LSTM example in Keras:
  We only specify the output dimensionality of the LSTM layer and leave every other argument at its Keras default. Keras has good defaults: things will usually perform reasonably well even without spending a lot of time hand-tuning parameters.

from keras.layers import LSTM

model = Sequential()
model.add(Embedding(max_features, 32))
model.add(LSTM(32))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(input_train, y_train,
                    epochs=10,
                    batch_size=128,
                    validation_split=0.2)
'''
Train on 20000 samples, validate on 5000 samples
Epoch 1/10
20000/20000 [==============================] - 51s 3ms/step - loss: 0.5097 - acc: 0.7618 - val_loss: 0.5537 - val_acc: 0.7454
Epoch 2/10
20000/20000 [==============================] - 47s 2ms/step - loss: 0.2905 - acc: 0.8856 - val_loss: 0.3063 - val_acc: 0.8714
Epoch 3/10
20000/20000 [==============================] - 48s 2ms/step - loss: 0.2315 - acc: 0.9106 - val_loss: 0.2942 - val_acc: 0.8906
Epoch 4/10
20000/20000 [==============================] - 46s 2ms/step - loss: 0.1951 - acc: 0.9268 - val_loss: 0.5121 - val_acc: 0.8334
Epoch 5/10
20000/20000 [==============================] - 48s 2ms/step - loss: 0.1724 - acc: 0.9370 - val_loss: 0.3116 - val_acc: 0.8832
Epoch 6/10
20000/20000 [==============================] - 50s 3ms/step - loss: 0.1525 - acc: 0.9433 - val_loss: 0.3921 - val_acc: 0.8792
Epoch 7/10
20000/20000 [==============================] - 47s 2ms/step - loss: 0.1380 - acc: 0.9500 - val_loss: 0.5595 - val_acc: 0.8574
Epoch 8/10
20000/20000 [==============================] - 44s 2ms/step - loss: 0.1310 - acc: 0.9526 - val_loss: 0.3344 - val_acc: 0.8768
Epoch 9/10
20000/20000 [==============================] - 45s 2ms/step - loss: 0.1174 - acc: 0.9580 - val_loss: 0.3414 - val_acc: 0.8834
Epoch 10/10
20000/20000 [==============================] - 46s 2ms/step - loss: 0.1077 - acc: 0.9627 - val_loss: 0.3649 - val_acc: 0.8814
'''

Results:

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(len(acc))

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()

plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()

[Figure: training and validation accuracy]
[Figure: training and validation loss]
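
To finish, a short sketch (my addition; the lab itself stops at the validation curves) that evaluates the trained LSTM on the held-out test set:

# evaluate() returns [loss, accuracy], matching the metrics passed to compile().
test_loss, test_acc = model.evaluate(input_test, y_test)
print('test acc:', test_acc)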


Reposted from blog.csdn.net/shidonghang/article/details/103675060