[Deep Learning Framework Keras] A Binary Classification Example

Copyright notice: This is an original article by the author and may not be reproduced without permission. https://blog.csdn.net/bqw18744018044/article/details/82598131

1. The IMDB dataset contains 50,000 movie reviews: 25,000 for training and 25,000 for testing. Each label is either 0 or 1, where 0 means a negative review and 1 a positive review.

from keras.datasets import imdb
(train_data,train_labels),(test_data,test_labels) = imdb.load_data(num_words=8000) # keep only the 8,000 most frequent words; every review is encoded with word indices below 8000
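Each review is a list of integer word indices, offset by 3 because Keras reserves 0, 1, and 2 for padding, start-of-sequence, and unknown tokens. A minimal sketch of how a review could be decoded back to text; with the real dataset you would build the index from `imdb.get_word_index()`, but a toy index is used here to illustrate the offset logic:

```python
# Toy stand-in for imdb.get_word_index(), which maps words to indices
word_index = {'the': 1, 'movie': 2, 'was': 3, 'great': 4}
reverse_index = {v: k for k, v in word_index.items()}

def decode_review(encoded):
    # subtract the 3-index offset; '?' marks reserved or unknown indices
    return ' '.join(reverse_index.get(i - 3, '?') for i in encoded)

print(decode_review([1, 4, 5, 6, 7]))  # ? the movie was great (1 is the start token)
```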

2. Inspecting the dataset

print('shape of train data is ',train_data.shape)
print('shape of train labels is ',train_labels.shape)
print('an example of train data is ',train_data[5])
shape of train data is  (25000,)
shape of train labels is  (25000,)
an example of train data is  [1, 778, 128, 74, 12, 630, 163, 15, 4, 1766, 7982, 1051, 2, 32, 85, 156, 45, 40, 148, 139, 121, 664, 665, 10, 10, 1361, 173, 4, 749, 2, 16, 3804, 8, 4, 226, 65, 12, 43, 127, 24, 2, 10, 10]

3. Preparing the data

import numpy as np
# The network's input must be a tensor, not a list of variable-length
# sequences, so turn the dataset into a 25000 x 8000 array
def vectorize_sequences(sequences,dimension=8000):
    # allocate a 25000 x 8000 two-dimensional NumPy array of zeros
    results = np.zeros((len(sequences),dimension))
    # multi-hot encoding: set the position of every word index in the review to 1
    for i,sequence in enumerate(sequences):
        results[i,sequence] = 1.
    return results
x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
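A quick sanity check of `vectorize_sequences` on toy input, with a small `dimension` so the result is easy to inspect. Note that the encoding is multi-hot: a word that appears several times in a review still contributes a single 1.

```python
import numpy as np

def vectorize_sequences(sequences, dimension=4):
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

# index 1 is repeated in the second review, yet only one 1 is set
x = vectorize_sequences([[0, 2], [1, 1, 3]])
print(x)  # [[1. 0. 1. 0.]
          #  [0. 1. 0. 1.]]
```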

4. Designing the network

  • For a binary classification problem whose input is a vector, a fully connected network with relu activations is a solid baseline
  • The output layer uses sigmoid so that the network outputs a probability between 0 and 1
  • For binary classification, binary_crossentropy is the standard loss
from keras import models
from keras import layers
def build_model():
    model = models.Sequential()
    model.add(layers.Dense(16,activation='relu',input_shape=(8000,)))
    model.add(layers.Dense(16,activation='relu'))
    model.add(layers.Dense(1,activation='sigmoid'))
    model.compile(optimizer='rmsprop',# optimizer parameters can also be set explicitly, e.g. optimizer = optimizers.RMSprop(lr=0.001)
                  loss='binary_crossentropy', # equivalent to loss = losses.binary_crossentropy
                  metrics=['accuracy']) # equivalent to metrics = [metrics.binary_accuracy]
    return model
model = build_model()
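As a sanity check on the architecture, the parameter count that `model.summary()` reports can be derived by hand: each Dense layer holds inputs × units weights plus units biases.

```python
# Parameter count of the three Dense layers defined above
layer1 = 8000 * 16 + 16  # 128016: 8000 inputs -> 16 units
layer2 = 16 * 16 + 16    # 272:    16 inputs   -> 16 units
layer3 = 16 * 1 + 1      # 17:     16 inputs   -> 1 unit
total = layer1 + layer2 + layer3
print(total)  # 128305
```

Almost all of the parameters sit in the first layer, which is typical when a wide multi-hot input feeds a narrow hidden layer.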

5. Holding out a validation set to choose the epochs hyperparameter

x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]

6. Training the model

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=20, # 20 passes over the full training data
                    batch_size=512, # mini-batches of 512 samples
                    validation_data=(x_val,y_val))
Train on 15000 samples, validate on 10000 samples
Epoch 1/20
15000/15000 [==============================] - 2s 100us/step - loss: 0.5112 - acc: 0.7813 - val_loss: 0.3790 - val_acc: 0.8654
Epoch 2/20
15000/15000 [==============================] - 1s 79us/step - loss: 0.3057 - acc: 0.9003 - val_loss: 0.3029 - val_acc: 0.8884
Epoch 3/20
15000/15000 [==============================] - 1s 77us/step - loss: 0.2272 - acc: 0.9245 - val_loss: 0.3130 - val_acc: 0.8718
Epoch 4/20
15000/15000 [==============================] - 1s 78us/step - loss: 0.1874 - acc: 0.9365 - val_loss: 0.2842 - val_acc: 0.8859
Epoch 5/20
15000/15000 [==============================] - 1s 78us/step - loss: 0.1571 - acc: 0.9467 - val_loss: 0.2845 - val_acc: 0.8868
Epoch 6/20
15000/15000 [==============================] - 1s 77us/step - loss: 0.1316 - acc: 0.9587 - val_loss: 0.3343 - val_acc: 0.8692
Epoch 7/20
15000/15000 [==============================] - 1s 78us/step - loss: 0.1159 - acc: 0.9625 - val_loss: 0.3089 - val_acc: 0.8851
Epoch 8/20
15000/15000 [==============================] - 1s 79us/step - loss: 0.0993 - acc: 0.9685 - val_loss: 0.3608 - val_acc: 0.8703
Epoch 9/20
15000/15000 [==============================] - 1s 80us/step - loss: 0.0850 - acc: 0.9750 - val_loss: 0.3473 - val_acc: 0.8788
Epoch 10/20
15000/15000 [==============================] - 1s 80us/step - loss: 0.0745 - acc: 0.9781 - val_loss: 0.3719 - val_acc: 0.8757
Epoch 11/20
15000/15000 [==============================] - 1s 81us/step - loss: 0.0658 - acc: 0.9811 - val_loss: 0.4018 - val_acc: 0.8736
Epoch 12/20
15000/15000 [==============================] - 1s 79us/step - loss: 0.0566 - acc: 0.9837 - val_loss: 0.4373 - val_acc: 0.8679
Epoch 13/20
15000/15000 [==============================] - 1s 77us/step - loss: 0.0483 - acc: 0.9861 - val_loss: 0.4484 - val_acc: 0.8703
Epoch 14/20
15000/15000 [==============================] - 1s 79us/step - loss: 0.0406 - acc: 0.9897 - val_loss: 0.4728 - val_acc: 0.8677
Epoch 15/20
15000/15000 [==============================] - 1s 78us/step - loss: 0.0355 - acc: 0.9913 - val_loss: 0.4968 - val_acc: 0.8693
Epoch 16/20
15000/15000 [==============================] - 1s 78us/step - loss: 0.0301 - acc: 0.9934 - val_loss: 0.5280 - val_acc: 0.8676
Epoch 17/20
15000/15000 [==============================] - 1s 78us/step - loss: 0.0277 - acc: 0.9933 - val_loss: 0.5561 - val_acc: 0.8654
Epoch 18/20
15000/15000 [==============================] - 1s 79us/step - loss: 0.0222 - acc: 0.9949 - val_loss: 0.5822 - val_acc: 0.8654
Epoch 19/20
15000/15000 [==============================] - 1s 79us/step - loss: 0.0175 - acc: 0.9971 - val_loss: 0.6369 - val_acc: 0.8581
Epoch 20/20
15000/15000 [==============================] - 1s 78us/step - loss: 0.0152 - acc: 0.9977 - val_loss: 0.6359 - val_acc: 0.8636

The history object returned by fit() records the metrics logged during training.

history_dict = history.history
history_dict.keys()
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])

7. Plotting loss and accuracy

x-axis: epochs; y-axis: loss

import matplotlib.pyplot as plt
%matplotlib inline
loss_values = history_dict['loss']
val_loss_values = history_dict['val_loss']
epochs = range(1,len(loss_values)+1)
plt.plot(epochs,loss_values,'bo',label='Training loss')
plt.plot(epochs,val_loss_values,'b',label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

(Figure: training and validation loss plotted against epochs)

x-axis: epochs; y-axis: accuracy

plt.clf()
acc_values = history_dict['acc']
val_acc_values = history_dict['val_acc']

plt.plot(epochs,acc_values,'bo',label='Training acc')
plt.plot(epochs,val_acc_values,'b',label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Acc')
plt.legend()

(Figure: training and validation accuracy plotted against epochs)
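Instead of reading the best epoch off the plot, it can be computed from `history.history` directly. A minimal sketch using the first six `val_loss` values from the log above (epoch 4's 0.2842 is the minimum over all 20 epochs):

```python
# val_loss per epoch, copied from the training log (truncated)
val_losses = [0.3790, 0.3029, 0.3130, 0.2842, 0.2845, 0.3343]
best_epoch = val_losses.index(min(val_losses)) + 1  # epochs are 1-based
print(best_epoch)  # 4
```

Keras can also automate this during training with the `keras.callbacks.EarlyStopping` callback, which stops fitting once the monitored metric stops improving.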

8. Retraining on the full training set with the chosen hyperparameters

model.fit(x_train,
         y_train,
         epochs=4, # the loss plot shows validation loss bottoms out around epoch 4
         batch_size=512)
Epoch 1/4
25000/25000 [==============================] - 1s 55us/step - loss: 0.2172 - acc: 0.9436
Epoch 2/4
25000/25000 [==============================] - 1s 56us/step - loss: 0.1509 - acc: 0.9552
Epoch 3/4
25000/25000 [==============================] - 1s 54us/step - loss: 0.1221 - acc: 0.9632
Epoch 4/4
25000/25000 [==============================] - 1s 54us/step - loss: 0.1012 - acc: 0.9678
<keras.callbacks.History at 0x7f68945847b8>

Evaluate the model's loss and accuracy on the test set:

results = model.evaluate(x_test,y_test)
results
[0.49199876737356185, 0.8586]
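To classify individual reviews, `model.predict` returns the sigmoid probabilities, which can be thresholded at 0.5 to obtain 0/1 labels. A sketch with illustrative probability values standing in for the real `model.predict(x_test[:3])` output:

```python
import numpy as np

# illustrative stand-in for model.predict(x_test[:3])
probs = np.array([[0.91], [0.12], [0.58]])
# probabilities above 0.5 map to the positive class (label 1)
labels = (probs > 0.5).astype('int32').ravel()
print(labels)  # [1 0 1]
```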
