TensorFlow Learning Notes 2: Text Classification Model


A brief introduction to word vectors and the Embedding layer

To summarize the reference material in one sentence: a word embedding assigns every word in the vocabulary a feature vector of length N, and this vector describes how the word is used. Compared with a one-hot encoding, it not only saves space but also captures the similarity relationships between words.
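As a minimal sketch of the idea (the vocabulary size and embedding dimension here are toy values chosen only for illustration, not the ones used later):

import numpy as np
from tensorflow import keras

# Toy vocabulary of 50 words, each mapped to a learned 4-dimensional vector.
embedding = keras.layers.Embedding(input_dim=50, output_dim=4)

token_ids = np.array([[3, 7, 7, 12]])  # one "sentence" of 4 word indices
vectors = embedding(token_ids)         # shape (1, 4, 4): one vector per word
print(vectors.shape)                   # identical indices (7 and 7) map to the same vector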

A brief introduction to the pooling layer

In one sentence: it reduces the dimensionality and simplifies the computation while losing as little information as possible.
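For example, the global average pooling used below is simply a per-feature mean over the time steps (a toy sketch with made-up numbers):

import numpy as np
from tensorflow import keras

# Toy input: 1 sequence, 3 time steps, 2 features per step.
x = np.array([[[1.0, 2.0],
               [3.0, 4.0],
               [5.0, 6.0]]], dtype=np.float32)

pooled = keras.layers.GlobalAveragePooling1D()(x)  # shape (1, 2)
print(np.mean(x, axis=1))                          # [[3. 4.]] -- the same per-feature average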

Code walkthrough

Loading the data

imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

You can think of the imdb module as a dataset in which the words are already encoded: in every review, each word has been replaced by an integer index (num_words=10000 keeps only the 10,000 most frequent words).
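A quick look at the encoded data (these prints mirror the ones in the complete script below):

print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
print(train_labels[0])       # each label is 0 (negative) or 1 (positive)
print(train_data[0][:10])    # the first 10 word indices of the first review
print(len(train_data[0]), len(train_data[1]))  # reviews have different lengths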

Converting the integers back to words
This demonstrates a generally useful technique: swapping a dictionary's keys and values:

word_index = imdb.get_word_index()  # keys are words, values are their integer codes
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2
word_index["<UNUSED>"] = 3

reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])


def decode_word_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])


print(decode_word_review(train_data[0]))

This step is not used in training; it is only shown for illustration.

Standardizing the review lengths
Reviews in the training and test sets have different lengths; they must be padded or truncated to a common length before they can be trained in batches:

train_data = keras.preprocessing.sequence.pad_sequences(train_data,
                                                        value=word_index["<PAD>"],
                                                        padding='post',
                                                        maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
                                                       value=word_index["<PAD>"],
                                                       padding='post',
                                                       maxlen=256)
print(len(train_data), len(train_data[1]))
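A minimal toy example of what pad_sequences does (made-up indices, not from the dataset):

from tensorflow import keras

padded = keras.preprocessing.sequence.pad_sequences(
    [[11, 12, 13], [21, 22, 23, 24, 25, 26]],
    value=0, padding='post', maxlen=5)
print(padded)
# [[11 12 13  0  0]    <- short sequence padded with 0 (<PAD>) at the end ('post')
#  [22 23 24 25 26]]   <- long sequence truncated (by default from the front)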

Building the model

vocab_size = 10000

model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))

The Embedding layer turns each integer word index into a 16-dimensional vector; GlobalAveragePooling1D then averages these vectors over the sequence dimension, reducing each review to a single fixed-length vector (a pooling operation, which shrinks the data while keeping the averaged information); the remaining layers are ordinary fully connected layers.
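model.summary() (called in the complete script below) should report output shapes and parameter counts roughly like this:

Embedding               (None, None, 16)   160000   # 10,000 words x 16 dimensions
GlobalAveragePooling1D  (None, 16)         0
Dense (relu)            (None, 16)         272      # 16*16 weights + 16 biases
Dense (sigmoid)         (None, 1)          17       # 16 weights + 1 bias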

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
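Before evaluating, the model has to be trained. The complete script below holds out the first 10,000 training reviews as a validation set and trains for 40 epochs:

x_val = train_data[:10000]
partial_x_train = train_data[10000:]

y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=40,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    verbose=1)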

Evaluating the model

result = model.evaluate(test_data, test_labels)
print(result)

history_dict = history.history
print(history_dict.keys())
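Note: with the TF 1.x / Keras version this post was written against, history.history contains the keys 'loss', 'acc', 'val_loss' and 'val_acc', which the plotting code below relies on; on TF 2.x the accuracy keys are named 'accuracy' and 'val_accuracy', so the plotting section would need those names instead.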

Complete code

from __future__ import absolute_import, division, print_function

import tensorflow as tf
from tensorflow import keras

print(tf.__version__)

imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

print("Training e tries: {}, labels: {}".format(len(train_data), len(train_labels)))
print(train_data[0])
print(len(train_data[0]), len(train_data[1]))

word_index = imdb.get_word_index()  # keys are words, values are their integer codes
word_index = {k: (v + 3) for k, v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2
word_index["<UNUSED>"] = 3

reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])


def decode_word_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])


print(decode_word_review(train_data[0]))

train_data = keras.preprocessing.sequence.pad_sequences(train_data,
                                                        value=word_index["<PAD>"],
                                                        padding='post',
                                                        maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
                                                       value=word_index["<PAD>"],
                                                       padding='post',
                                                       maxlen=256)
print(len(train_data), len(train_data[1]))
# print(train_data[0])

vocab_size = 10000

model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))

model.summary()

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

x_val = train_data[:10000]
partial_x_train = train_data[10000:]

y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]

history = model.fit(partial_x_train,
                    partial_y_train,
                    epochs=40,
                    batch_size=512,
                    validation_data=(x_val, y_val),
                    verbose=1)

result = model.evaluate(test_data, test_labels)
print(result)

history_dict = history.history
print(history_dict.keys())

import matplotlib.pyplot as plt

acc = history_dict['acc']
val_acc = history_dict['val_acc']
loss = history_dict['loss']
val_loss = history_dict['val_loss']

epochs = range(1, len(acc) + 1)

plt.subplot(2, 1, 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()

plt.subplot(2, 1, 2)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()

plt.show()
