TensorFlow 2.0 Tutorial 9: Text Classification

  We will build a simple text classifier and train and test it on the IMDB movie-review dataset.

  from __future__ import absolute_import, division, print_function

  import tensorflow as tf

  from tensorflow import keras

  import numpy as np

  print(tf.__version__)

  2.0.0-alpha0

  1. The IMDB Dataset

  Download the data:

  imdb = keras.datasets.imdb

  (train_x, train_y), (test_x, test_y) = keras.datasets.imdb.load_data(num_words=10000)  # keep only the 10,000 most frequent words
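  Anything rarer than the 10,000 most frequent words is replaced by the out-of-vocabulary id 2. A quick check (illustrative, not from the original post) confirms that no id exceeds 9999:

  # Illustrative check: with num_words=10000, every remaining id is below 10000.
  print(max(max(seq) for seq in train_x))  # < 10000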

  Exploring the IMDB data

  print("Training entries: {}, labels: {}".format(len(train_x), len(train_y)))

  print(train_x[0])

  print('len: ',len(train_x[0]), len(train_x[1]))

  Training entries: 25000, labels: 25000

  [1, 14, 22, 16, 43, 530, 973, 1622, 1385, 65, 458, 4468, 66, 3941, 4, 173, 36, 256, 5, 25, 100, 43, 838, 112, 50, 670, 2, 9, 35, 480, 284, 5, 150, 4, 172, 112, 167, 2, 336, 385, 39, 4, 172, 4536, 1111, 17, 546, 38, 13, 447, 4, 192, 50, 16, 6, 147, 2025, 19, 14, 22, 4, 1920, 4613, 469, 4, 22, 71, 87, 12, 16, 43, 530, 38, 76, 15, 13, 1247, 4, 22, 17, 515, 17, 12, 16, 626, 18, 2, 5, 62, 386, 12, 8, 316, 8, 106, 5, 4, 2223, 5244, 16, 480, 66, 3785, 33, 4, 130, 12, 16, 38, 619, 5, 25, 124, 51, 36, 135, 48, 25, 1415, 33, 6, 22, 12, 215, 28, 77, 52, 5, 14, 407, 16, 82, 2, 8, 4, 107, 117, 5952, 15, 256, 4, 2, 7, 3766, 5, 723, 36, 71, 43, 530, 476, 26, 400, 317, 46, 7, 4, 2, 1029, 13, 104, 88, 4, 381, 15, 297, 98, 32, 2071, 56, 26, 141, 6, 194, 7486, 18, 4, 226, 22, 21, 134, 476, 26, 480, 5, 144, 30, 5535, 18, 51, 36, 28, 224, 92, 25, 104, 4, 226, 65, 16, 38, 1334, 88, 12, 16, 283, 5, 16, 4472, 113, 103, 32, 15, 16, 5345, 19, 178, 32]

  len: 218 189

  Create dictionaries mapping between word ids and words

  word_index = imdb.get_word_index()

  word2id = {k: (v + 3) for k, v in word_index.items()}  # shift ids by 3 to reserve 0-3 for special tokens

  word2id['<PAD>'] = 0

  word2id['<START>'] = 1

  word2id['<UNK>'] = 2

  word2id['<UNUSED>'] = 3

  id2word = {v: k for k, v in word2id.items()}

  def get_words(sent_ids):

      return ' '.join([id2word.get(i, '?') for i in sent_ids])

  sent = get_words(train_x[0])

  print(sent)

  <START> this film was just brilliant casting location scenery story direction everyone's really suited the part they played and you could just imagine being there robert <UNK> is an amazing actor and now the same being director <UNK> father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for <UNK> and would recommend it to everyone to watch and the fly fishing was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also <UNK> to the two little boy's that played the <UNK> of norman and paul they were just brilliant children are often left out of the <UNK> list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all

  2. Preparing the Data

  Reviews vary in length, but the network expects fixed-length inputs, so we pad every review to 256 ids:

  # pad at the end of each sentence with the <PAD> id

  train_x = keras.preprocessing.sequence.pad_sequences(

      train_x, value=word2id['<PAD>'],

      padding='post', maxlen=256

  )

  test_x = keras.preprocessing.sequence.pad_sequences(

      test_x, value=word2id['<PAD>'],

      padding='post', maxlen=256

  )

  print(train_x[0])

  print('len: ',len(train_x[0]), len(train_x[1]))

  [ 1 14 22 16 43 530 973 1622 1385 65 458 4468 66 3941

  4 173 36 256 5 25 100 43 838 112 50 670 2 9

  35 480 284 5 150 4 172 112 167 2 336 385 39 4

  172 4536 1111 17 546 38 13 447 4 192 50 16 6 147

  2025 19 14 22 4 1920 4613 469 4 22 71 87 12 16

  43 530 38 76 15 13 1247 4 22 17 515 17 12 16

  626 18 2 5 62 386 12 8 316 8 106 5 4 2223

  5244 16 480 66 3785 33 4 130 12 16 38 619 5 25

  124 51 36 135 48 25 1415 33 6 22 12 215 28 77

  52 5 14 407 16 82 2 8 4 107 117 5952 15 256

  4 2 7 3766 5 723 36 71 43 530 476 26 400 317

  46 7 4 2 1029 13 104 88 4 381 15 297 98 32

  2071 56 26 141 6 194 7486 18 4 226 22 21 134 476

  26 480 5 144 30 5535 18 51 36 28 224 92 25 104

  4 226 65 16 38 1334 88 12 16 283 5 16 4472 113

  103 32 15 16 5345 19 178 32 0 0 0 0 0 0

  0 0 0 0 0 0 0 0 0 0 0 0 0 0

  0 0 0 0 0 0 0 0 0 0 0 0 0 0

  0 0 0 0]

  len: 256 256
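  To see what pad_sequences did in isolation, here is a minimal sketch on a toy input (not from the original post):

  demo = keras.preprocessing.sequence.pad_sequences(
      [[11, 12, 13]], value=0, padding='post', maxlen=6)
  print(demo)  # [[11 12 13  0  0  0]] -- zeros appended because padding='post'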

  3. Building the Model

  import tensorflow.keras.layers as layers

  vocab_size = 10000

  model = keras.Sequential()

  model.add(layers.Embedding(vocab_size, 16))  # (batch, seq_len) -> (batch, seq_len, 16)

  model.add(layers.GlobalAveragePooling1D())   # average over seq_len -> (batch, 16)

  model.add(layers.Dense(16, activation='relu'))

  model.add(layers.Dense(1, activation='sigmoid'))  # probability that the review is positive

  model.summary()

  model.compile(optimizer='adam',

                loss='binary_crossentropy',

                metrics=['accuracy'])

  Model: "sequential"

  _________________________________________________________________

  Layer (type) Output Shape Param #

  =================================================================

  embedding (Embedding) (None, None, 16) 160000

  _________________________________________________________________

  global_average_pooling1d (Gl (None, 16) 0

  _________________________________________________________________

  dense (Dense) (None, 16) 272

  _________________________________________________________________

  dense_1 (Dense) (None, 1) 17

  =================================================================

  Total params: 160,289

  Trainable params: 160,289

  Non-trainable params: 0

  _________________________________________________________________
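  The summary shows GlobalAveragePooling1D collapsing the (None, None, 16) embedding output to (None, 16): it averages the 16-dimensional word vectors over the sequence axis, so a review of any length becomes one fixed-size vector. A small check (illustrative, not in the original post) makes this concrete:

  x = tf.random.normal((2, 7, 16))                    # (batch, seq_len, embed_dim)
  pooled = layers.GlobalAveragePooling1D()(x)         # (batch, embed_dim)
  manual = tf.reduce_mean(x, axis=1)                  # mean over the time axis
  print(np.allclose(pooled.numpy(), manual.numpy()))  # True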

  4. Training and Validation

  Hold out the first 10,000 training examples as a validation set:

  x_val = train_x[:10000]

  x_train = train_x[10000:]

  y_val = train_y[:10000]

  y_train = train_y[10000:]
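  A quick shape check (illustrative, not in the original post) confirms the split: 15,000 training and 10,000 validation examples, each padded to 256 ids.

  print(x_train.shape, x_val.shape)  # (15000, 256) (10000, 256)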

  history = model.fit(x_train, y_train,

                      epochs=40, batch_size=512,

                      validation_data=(x_val, y_val),

                      verbose=1)

  result = model.evaluate(test_x, test_y)

  print(result)

  Train on 15000 samples, validate on 10000 samples

  Epoch 1/40

  15000/15000 [==============================] - 1s 73us/sample - loss: 0.6919 - accuracy: 0.5071 - val_loss: 0.6901 - val_accuracy: 0.5101

  Epoch 2/40

  15000/15000 [==============================] - 1s 44us/sample - loss: 0.6864 - accuracy: 0.6242 - val_loss: 0.6829 - val_accuracy: 0.6380

  Epoch 3/40

  15000/15000 [==============================] - 1s 42us/sample - loss: 0.6752 - accuracy: 0.6881 - val_loss: 0.6691 - val_accuracy: 0.7091

  Epoch 4/40

  15000/15000 [==============================] - 1s 45us/sample - loss: 0.6559 - accuracy: 0.7162 - val_loss: 0.6471 - val_accuracy: 0.7509

  Epoch 5/40

  15000/15000 [==============================] - 1s 44us/sample - loss: 0.6274 - accuracy: 0.7697 - val_loss: 0.6175 - val_accuracy: 0.7724

  Epoch 6/40

  15000/15000 [==============================] - 1s 43us/sample - loss: 0.5909 - accuracy: 0.8049 - val_loss: 0.5821 - val_accuracy: 0.7869

  Epoch 7/40

  15000/15000 [==============================] - 1s 45us/sample - loss: 0.5490 - accuracy: 0.8208 - val_loss: 0.5418 - val_accuracy: 0.8158

  Epoch 8/40

  15000/15000 [==============================] - 1s 42us/sample - loss: 0.5054 - accuracy: 0.8437 - val_loss: 0.5030 - val_accuracy: 0.8285

  Epoch 9/40

  15000/15000 [==============================] - 1s 45us/sample - loss: 0.4630 - accuracy: 0.8557 - val_loss: 0.4662 - val_accuracy: 0.8400

  Epoch 10/40

  15000/15000 [==============================] - 1s 49us/sample - loss: 0.4239 - accuracy: 0.8707 - val_loss: 0.4345 - val_accuracy: 0.8470

  Epoch 11/40

  15000/15000 [==============================] - 1s 46us/sample - loss: 0.3896 - accuracy: 0.8772 - val_loss: 0.4070 - val_accuracy: 0.8563

  Epoch 12/40

  15000/15000 [==============================] - 1s 47us/sample - loss: 0.3599 - accuracy: 0.8867 - val_loss: 0.3856 - val_accuracy: 0.8594

  Epoch 13/40

  15000/15000 [==============================] - 1s 44us/sample - loss: 0.3352 - accuracy: 0.8925 - val_loss: 0.3660 - val_accuracy: 0.8646

  Epoch 14/40

  15000/15000 [==============================] - 1s 44us/sample - loss: 0.3131 - accuracy: 0.8978 - val_loss: 0.3517 - val_accuracy: 0.8697

  Epoch 15/40

  15000/15000 [==============================] - 1s 48us/sample - loss: 0.2947 - accuracy: 0.9013 - val_loss: 0.3392 - val_accuracy: 0.8716

  Epoch 16/40

  15000/15000 [==============================] - 1s 44us/sample - loss: 0.2782 - accuracy: 0.9077 - val_loss: 0.3293 - val_accuracy: 0.8747

  Epoch 17/40

  15000/15000 [==============================] - 1s 45us/sample - loss: 0.2632 - accuracy: 0.9126 - val_loss: 0.3208 - val_accuracy: 0.8757

  Epoch 18/40

  15000/15000 [==============================] - 1s 43us/sample - loss: 0.2500 - accuracy: 0.9159 - val_loss: 0.3132 - val_accuracy: 0.8800

  Epoch 19/40

  15000/15000 [==============================] - 1s 46us/sample - loss: 0.2381 - accuracy: 0.9197 - val_loss: 0.3073 - val_accuracy: 0.8792

  Epoch 20/40

  15000/15000 [==============================] - 1s 44us/sample - loss: 0.2274 - accuracy: 0.9229 - val_loss: 0.3029 - val_accuracy: 0.8801

  Epoch 21/40

  15000/15000 [==============================] - 1s 44us/sample - loss: 0.2167 - accuracy: 0.9277 - val_loss: 0.2992 - val_accuracy: 0.8811

  Epoch 22/40

  15000/15000 [==============================] - 1s 43us/sample - loss: 0.2077 - accuracy: 0.9299 - val_loss: 0.2951 - val_accuracy: 0.8835

  Epoch 23/40

  15000/15000 [==============================] - 1s 42us/sample - loss: 0.1986 - accuracy: 0.9335 - val_loss: 0.2931 - val_accuracy: 0.8827

  Epoch 24/40

  15000/15000 [==============================] - 1s 42us/sample - loss: 0.1907 - accuracy: 0.9371 - val_loss: 0.2911 - val_accuracy: 0.8835

  Epoch 25/40

  15000/15000 [==============================] - 1s 42us/sample - loss: 0.1828 - accuracy: 0.9415 - val_loss: 0.2885 - val_accuracy: 0.8841

  Epoch 26/40

  15000/15000 [==============================] - 1s 43us/sample - loss: 0.1756 - accuracy: 0.9436 - val_loss: 0.2884 - val_accuracy: 0.8840

  Epoch 27/40

  15000/15000 [==============================] - 1s 42us/sample - loss: 0.1689 - accuracy: 0.9463 - val_loss: 0.2870 - val_accuracy: 0.8836

  Epoch 28/40

  15000/15000 [==============================] - 1s 41us/sample - loss: 0.1624 - accuracy: 0.9497 - val_loss: 0.2870 - val_accuracy: 0.8853

  Epoch 29/40

  15000/15000 [==============================] - 1s 46us/sample - loss: 0.1568 - accuracy: 0.9523 - val_loss: 0.2872 - val_accuracy: 0.8840

  Epoch 30/40

  15000/15000 [==============================] - 1s 43us/sample - loss: 0.1509 - accuracy: 0.9534 - val_loss: 0.2864 - val_accuracy: 0.8858

  Epoch 31/40

  15000/15000 [==============================] - 1s 43us/sample - loss: 0.1449 - accuracy: 0.9567 - val_loss: 0.2866 - val_accuracy: 0.8858

  Epoch 32/40

  15000/15000 [==============================] - 1s 45us/sample - loss: 0.1395 - accuracy: 0.9595 - val_loss: 0.2874 - val_accuracy: 0.8856

  Epoch 33/40

  15000/15000 [==============================] - 1s 43us/sample - loss: 0.1343 - accuracy: 0.9600 - val_loss: 0.2888 - val_accuracy: 0.8863

  Epoch 34/40

  15000/15000 [==============================] - 1s 44us/sample - loss: 0.1297 - accuracy: 0.9623 - val_loss: 0.2903 - val_accuracy: 0.8843

  Epoch 35/40

  15000/15000 [==============================] - 1s 43us/sample - loss: 0.1255 - accuracy: 0.9630 - val_loss: 0.2915 - val_accuracy: 0.8870

  Epoch 36/40

  15000/15000 [==============================] - 1s 42us/sample - loss: 0.1208 - accuracy: 0.9659 - val_loss: 0.2928 - val_accuracy: 0.8862

  Epoch 37/40

  15000/15000 [==============================] - 1s 48us/sample - loss: 0.1162 - accuracy: 0.9679 - val_loss: 0.2949 - val_accuracy: 0.8851

  Epoch 38/40

  15000/15000 [==============================] - 1s 49us/sample - loss: 0.1121 - accuracy: 0.9691 - val_loss: 0.2975 - val_accuracy: 0.8848

  Epoch 39/40

  15000/15000 [==============================] - 1s 49us/sample - loss: 0.1088 - accuracy: 0.9697 - val_loss: 0.3003 - val_accuracy: 0.8840

  Epoch 40/40

  15000/15000 [==============================] - 1s 45us/sample - loss: 0.1046 - accuracy: 0.9721 - val_loss: 0.3022 - val_accuracy: 0.8843

  25000/25000 [==============================] - 1s 22us/sample - loss: 0.3216 - accuracy: 0.8729

  [0.32155542838573453, 0.87292]
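  As an illustrative follow-up (not in the original post; predict_sentiment is a hypothetical helper), a new review can be scored by encoding it with word2id, padding it like the training data, and calling model.predict:

  # Hypothetical helper, assuming word2id and model from above are in scope.
  def predict_sentiment(text):
      ids = [1] + [word2id.get(w, 2) for w in text.lower().split()]  # 1 = <START>, 2 = <UNK>
      ids = [i if i < 10000 else 2 for i in ids]  # match num_words=10000: rare words -> <UNK>
      ids = keras.preprocessing.sequence.pad_sequences(
          [ids], value=word2id['<PAD>'], padding='post', maxlen=256)
      return float(model.predict(ids)[0][0])  # near 1.0 => positive, near 0.0 => negative

  print(predict_sentiment('this film was just brilliant'))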

  5. Plotting the Loss and Accuracy Curves

  import matplotlib.pyplot as plt

  history_dict = history.history

  history_dict.keys()

  acc = history_dict['accuracy']

  val_acc = history_dict['val_accuracy']

  loss = history_dict['loss']

  val_loss = history_dict['val_loss']

  epochs = range(1, len(acc)+1)

  plt.plot(epochs, loss, 'bo', label='train loss')

  plt.plot(epochs, val_loss, 'b', label='val loss')

  plt.title('Train and val loss')

  plt.xlabel('Epochs')

  plt.ylabel('Loss')

  plt.legend()

  plt.show()

  

[Figure: training and validation loss over epochs]

  plt.clf() # clear figure

  plt.plot(epochs, acc, 'bo', label='Training acc')

  plt.plot(epochs, val_acc, 'b', label='Validation acc')

  plt.title('Training and validation accuracy')

  plt.xlabel('Epochs')

  plt.ylabel('Accuracy')

  plt.legend()

  plt.show()

  

[Figure: training and validation accuracy over epochs]
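
  The curves show the classic overfitting pattern: training loss keeps falling while validation loss bottoms out around epoch 30 and then slowly rises. Here is a minimal sketch (assuming the same model and data as above, not in the original post) of stopping at the right moment with Keras's EarlyStopping callback:

  # Stop once val_loss has not improved for 3 epochs and roll back to the best weights.
  early_stop = keras.callbacks.EarlyStopping(
      monitor='val_loss', patience=3, restore_best_weights=True)
  history = model.fit(x_train, y_train, epochs=40, batch_size=512,
                      validation_data=(x_val, y_val),
                      callbacks=[early_stop], verbose=1)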

Reposted from www.cnblogs.com/gnz49/p/11435194.html