CNN Keras 2.1 image classification test

Copyright notice: This is an original article by the blogger; reproduction without permission is prohibited. https://blog.csdn.net/d413122031/article/details/79150887

# Model hyperparameters (row and column are set when the images are loaded)
input_size = row * column  # 100 * 100 pixels per image
batch_size = 32
hidden_neurons = 30
epochs = 25
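The X_train shape (4778, 100, 100, 1) in the log below implies the grayscale images were reshaped to add a trailing channel axis, since Keras conv layers with channels_last expect (N, rows, cols, channels). A minimal sketch of that step, using random arrays as a hypothetical stand-in for the crawled images (the /255 normalization is an assumption, not shown in the original post):

```python
import numpy as np

row, column = 100, 100

# Hypothetical stand-in for N loaded grayscale images of shape (N, 100, 100).
images = np.random.randint(0, 256, size=(8, row, column)).astype('float32')

# Append a single-channel axis and scale pixel values to [0, 1].
X_train = images.reshape(-1, row, column, 1) / 255.0
print(X_train.shape)  # (8, 100, 100, 1)
```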

# Build a fairly simple CNN model
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Activation, Dropout, Flatten, Dense

model = Sequential()
# Two 2x2 convolutions with 32 filters each, on single-channel (grayscale) input
model.add(Convolution2D(32, (2, 2), input_shape=(row, column, 1)))
model.add(Activation('relu'))
model.add(Convolution2D(32, (2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))
model.add(Flatten())

# Fully connected classifier head
model.add(Dense(hidden_neurons))
model.add(Activation('relu'))
model.add(Dense(classes))
model.add(Activation('softmax'))
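For 100x100 inputs, the shapes flow as follows: each 2x2 valid convolution shrinks each side by one (100 to 99 to 98), the 2x2 max pool halves that to 49, so Flatten produces 49 * 49 * 32 = 76832 features feeding the 30-neuron dense layer. A quick check of this arithmetic:

```python
# Shape arithmetic for the model above ('valid' padding, stride 1).
def conv_out(n, k):
    """Output size of a 'valid' convolution with kernel size k."""
    return n - k + 1

size = 100
size = conv_out(size, 2)   # first Conv2D:  99
size = conv_out(size, 2)   # second Conv2D: 98
size = size // 2           # 2x2 max pooling: 49
flat = size * size * 32    # 32 filters in the last conv layer
print(flat)  # 76832
```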


# Define loss & compile the model. For multi-class classification the loss
# is generally categorical_crossentropy; adadelta is used here as the optimizer
# (see the official Keras docs for guidance on choosing an optimizer).
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adadelta')
# optimizer: "rmsprop" / "sgd" / "adadelta"; loss: "binary_crossentropy" / "categorical_crossentropy"
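Note that `categorical_crossentropy` expects one-hot label vectors of width `classes`, while the log below shows Y_train with shape (4778, 1), i.e. integer class indices. In Keras this conversion is `keras.utils.to_categorical`; a plain NumPy equivalent (with hypothetical label values) is:

```python
import numpy as np

classes = 7  # seven dog breeds in this experiment

# Integer labels shaped like the (N, 1) Y_train in the log (values are made up).
y = np.array([[0], [3], [6], [1]])

# One-hot encode: equivalent to keras.utils.to_categorical(y, classes)
Y_onehot = np.eye(classes)[y.ravel()]
print(Y_onehot.shape)  # (4, 7)
```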

# Fit the model; the last 10% of the samples are held out for validation
model.fit(X_train, Y_train, batch_size=batch_size, epochs=epochs, validation_split=0.1, verbose=1)
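One thing worth knowing about `validation_split`: Keras slices off the *last* fraction of the samples without shuffling first. For the 4778 samples here that arithmetic reproduces the 4300/478 split in the log:

```python
validation_split = 0.1
n_samples = 4778  # Total Train Data from the log

# Keras computes the split index this way; the tail becomes validation data,
# with no shuffling beforehand.
split_at = int(n_samples * (1.0 - validation_split))
print(split_at, n_samples - split_at)  # 4300 478
```

If the crawled images were loaded class by class, that unshuffled tail could consist of a single breed, which would plausibly explain the near-zero val_acc in the early epochs below; shuffling X_train and Y_train together (e.g. with one `numpy.random.permutation` index array) before calling fit avoids this.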

Test results --------------------------------------------------------------------------------------------------------

The test images are seven dog breeds crawled from Baidu Images.

Training ran for 25 epochs; after roughly ten epochs the test accuracy starts to improve, but with the fully connected layer removed the accuracy is not impressive.

(100, 100, 1)
Car Train data : 567
Total Train Data : 4778
X_train shape
(4778, 100, 100, 1)
Y_train Shape
(4778, 1)


Total Test Data : 520
(4778, 1)
[0]
Train on 4300 samples, validate on 478 samples
Epoch 1/25
4300/4300 [==============================] - 101s 23ms/step - loss: 1.8854 - acc: 0.1737 - val_loss: 2.6176 - val_acc: 0.0000e+00
Epoch 2/25
4300/4300 [==============================] - 100s 23ms/step - loss: 1.6941 - acc: 0.2781 - val_loss: 4.4016 - val_acc: 0.0000e+00
Epoch 3/25
4300/4300 [==============================] - 105s 24ms/step - loss: 1.4562 - acc: 0.4226 - val_loss: 5.0508 - val_acc: 0.0000e+00
Epoch 4/25
4300/4300 [==============================] - 100s 23ms/step - loss: 1.3374 - acc: 0.4847 - val_loss: 4.5513 - val_acc: 0.0000e+00
Epoch 5/25
4300/4300 [==============================] - 101s 23ms/step - loss: 1.2299 - acc: 0.5372 - val_loss: 4.5998 - val_acc: 0.0000e+00
Epoch 6/25
4300/4300 [==============================] - 101s 23ms/step - loss: 1.1201 - acc: 0.5921 - val_loss: 4.9415 - val_acc: 0.0000e+00
Epoch 7/25
4300/4300 [==============================] - 101s 23ms/step - loss: 1.0057 - acc: 0.6374 - val_loss: 4.3453 - val_acc: 0.0000e+00
Epoch 8/25
4300/4300 [==============================] - 100s 23ms/step - loss: 0.8593 - acc: 0.7053 - val_loss: 4.4878 - val_acc: 0.0000e+00
Epoch 9/25
4300/4300 [==============================] - 100s 23ms/step - loss: 0.7286 - acc: 0.7584 - val_loss: 4.5572 - val_acc: 0.0000e+00
Epoch 10/25
4300/4300 [==============================] - 103s 24ms/step - loss: 0.6007 - acc: 0.8105 - val_loss: 4.1045 - val_acc: 0.0084
Epoch 11/25
4300/4300 [==============================] - 102s 24ms/step - loss: 0.4971 - acc: 0.8581 - val_loss: 4.8304 - val_acc: 0.0042
Epoch 12/25
4300/4300 [==============================] - 100s 23ms/step - loss: 0.4063 - acc: 0.8807 - val_loss: 4.6724 - val_acc: 0.0230
Epoch 13/25
4300/4300 [==============================] - 100s 23ms/step - loss: 0.3318 - acc: 0.9100 - val_loss: 4.8855 - val_acc: 0.0356
Epoch 14/25
4300/4300 [==============================] - 103s 24ms/step - loss: 0.2624 - acc: 0.9307 - val_loss: 4.6594 - val_acc: 0.0628
Epoch 15/25
4300/4300 [==============================] - 105s 24ms/step - loss: 0.2086 - acc: 0.9505 - val_loss: 5.5564 - val_acc: 0.0397
Epoch 16/25
4300/4300 [==============================] - 105s 24ms/step - loss: 0.1722 - acc: 0.9563 - val_loss: 6.2152 - val_acc: 0.0418
Epoch 17/25
4300/4300 [==============================] - 105s 24ms/step - loss: 0.1303 - acc: 0.9714 - val_loss: 5.0574 - val_acc: 0.0962
Epoch 18/25
4300/4300 [==============================] - 103s 24ms/step - loss: 0.1002 - acc: 0.9788 - val_loss: 6.6792 - val_acc: 0.0356
Epoch 19/25
4300/4300 [==============================] - 100s 23ms/step - loss: 0.0906 - acc: 0.9800 - val_loss: 6.8854 - val_acc: 0.0335
Epoch 20/25
4300/4300 [==============================] - 100s 23ms/step - loss: 0.0670 - acc: 0.9872 - val_loss: 5.9952 - val_acc: 0.0921
Epoch 21/25
4300/4300 [==============================] - 103s 24ms/step - loss: 0.0567 - acc: 0.9900 - val_loss: 7.9824 - val_acc: 0.0126
Epoch 22/25
4300/4300 [==============================] - 104s 24ms/step - loss: 0.0462 - acc: 0.9935 - val_loss: 8.4218 - val_acc: 0.0146
Epoch 23/25
4300/4300 [==============================] - 104s 24ms/step - loss: 0.0381 - acc: 0.9933 - val_loss: 8.3374 - val_acc: 0.0356
Epoch 24/25
4300/4300 [==============================] - 104s 24ms/step - loss: 0.0320 - acc: 0.9947 - val_loss: 9.1571 - val_acc: 0.0188
Epoch 25/25
4300/4300 [==============================] - 100s 23ms/step - loss: 0.0252 - acc: 0.9956 - val_loss: 6.1695 - val_acc: 0.1423
520/520 [==============================] - 3s 5ms/step
('\nTest accuracy:', 0.21153846153846154)



