CNN recognizes 4 weather states (TensorFlow, network optimization)

Project data and source code

Available for download on github:

https://github.com/chenshunpeng/Weather-recognition-based-on-CNN

1. Data processing

Set up the GPU environment

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]                                        # If there are multiple GPUs, use only GPU 0
    tf.config.experimental.set_memory_growth(gpu0, True)  # Allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0],"GPU")

Import Data

import matplotlib.pyplot as plt
import os,PIL

# Set random seeds so the results are as reproducible as possible
import numpy as np
np.random.seed(1)

# Set random seeds so the results are as reproducible as possible
import tensorflow as tf
tf.random.set_seed(1)

from tensorflow import keras
from tensorflow.keras import layers,models

import pathlib

Set data address

data_dir = "E:\demo_study\jupyter\Jupyter_notebook\Weather-recognition-based-on-CNN\weather_photos"
data_dir = pathlib.Path(data_dir)

View the data

The dataset is divided into four categories: cloudy, rain, shine, and sunrise. The images are stored in subfolders of the weather_photos folder, each named after its category.

View the total number of pictures:

image_count = len(list(data_dir.glob('*/*.jpg')))
print("图片总数为:",image_count)

output:

Total number of images: 1125
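
As an extra sanity check (a small sketch that is not part of the original code), the number of images per class can be listed with pathlib, assuming each class lives in its own subfolder as described above:

# Hedged sketch: count images per class subfolder (cloudy / rain / shine / sunrise)
for class_dir in sorted(p for p in data_dir.iterdir() if p.is_dir()):
    print(class_dir.name, len(list(class_dir.glob('*.jpg'))))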

Check out the first image:

roses = list(data_dir.glob('sunrise/*.jpg'))
PIL.Image.open(str(roses[0]))

output:

(image output: the first sunrise photo from the dataset is displayed)

1.1. Data preprocessing

Load the data:

Use image_dataset_from_directory to load the data from disk into a tf.data.Dataset.

Set data parameters:

batch_size = 64
img_height = 180
img_width = 180

Load the training set data:

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

output:

Found 1125 files belonging to 4 classes.
Using 900 files for training.

Load the validation set data:

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

output:

Found 1125 files belonging to 4 classes.
Using 225 files for validation.

The class_names attribute gives the labels of the dataset; the labels correspond alphabetically to the directory names:

class_names = train_ds.class_names
print(class_names)

output:

['cloudy', 'rain', 'shine', 'sunrise']

View the train_ds data type:

train_ds

output:

<PrefetchDataset element_spec=(TensorSpec(shape=(None, 180, 180, 3), dtype=tf.float32, name=None), TensorSpec(shape=(None,), dtype=tf.int32, name=None))>

1.2. Data visualization

# The plotted images differ on every run

plt.figure(figsize=(12, 10))

for images, labels in train_ds.take(1):
    for i in range(30):
        ax = plt.subplot(5, 6, i + 1)

        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])
        plt.axis("off")

plt.savefig('pic1.jpg', dpi=600)  # Save the figure at the specified resolution

output:
(image output: a 5×6 grid of training images with their class labels)

View image shapes:

for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break
  • image_batch is a tensor of shape (64, 180, 180, 3): a batch of 64 images of shape 180×180×3 (the last dimension is the RGB color channels).
  • labels_batch is a tensor of shape (64,); these labels correspond to the 64 images.

output:

(64, 180, 180, 3)
(64,)

1.3. Configure the dataset

shuffle(): Shuffle the data, for details, please refer to: The understanding of buffer_size in the data set shuffle method

prefetch(): Prefetch data to speed up operation. For details, please refer to: Better performance with the tf.data API

cache(): Cache the data set into the memory to speed up the operation

Recommend a blog: [Study Notes] Use tf.data to optimize the preprocessing process

prefetch() in more detail: it overlaps the preprocessing of the data with the execution of the training step. Without prefetch(), the CPU prepares a batch while the accelerator sits idle, and then the accelerator trains while the CPU sits idle; with prefetch(), the next batch is prepared while the current training step runs.

(The original post shows two timeline figures here: one without prefetch() and one with it.)

Of course, other pipeline arrangements are possible.

AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

2. Construct CNN network

The input to a convolutional neural network (CNN) is a tensor of shape (image_height, image_width, color_channels), containing the image height, width, and color information (color_channels corresponds to the three RGB color channels).

First the images are processed with layers.experimental.preprocessing.Rescaling (official explanation: rescales and offsets the values of a batch of images, e.g. from inputs in the [0, 255] range to inputs in the [0, 1] range).

For a detailed introduction to the layer, see: tf.keras.layers.Rescaling

  • setting scale=1./255 rescales inputs from the [0, 255] range to the [0, 1] range
  • the input_shape parameter specifies the size format of the images
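
To make the rescaling concrete, here is a minimal sketch (not from the original post) that applies the layer to a few hypothetical pixel values:

import numpy as np
from tensorflow.keras import layers

# Hedged sketch: Rescaling(1./255) maps pixel values from [0, 255] to [0, 1]
rescale = layers.experimental.preprocessing.Rescaling(1./255)
dummy = np.array([[0., 127.5, 255.]], dtype="float32")  # hypothetical pixel values
print(rescale(dummy).numpy())  # expected: [[0.  0.5 1. ]]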

2.1. Pooling layer

Pooling layer introduction:

In CNN, a pooling layer is usually added between adjacent convolutional layers. The pooling layer can effectively reduce the size of the parameter matrix, thereby reducing the number of parameters in the last connection layer.

The two common types of pooling layer are shown in the figure below (reference: a post on Zhihu):

(image: illustration of 2×2 average pooling and max pooling)

The role of the pooling layer:

  • An obvious benefit of shrinking the feature map is that it reduces the number of parameters: it lowers dimensionality, removes redundant information, compresses features, simplifies the network, and reduces computation and memory consumption.
  • One of the most important functions of pooling is to enlarge the receptive field, i.e. the region of the original image that a single pixel of the feature map can "see". In shallow convolutional layers the feature map is still large, so one pixel corresponds to a very small image region; stacking convolutions lets neighboring neurons exchange information to some extent, but the effect is limited. Pooling, by simply merging several adjacent neurons, lets each neuron of the reduced feature map gather information from a larger region of the original image, which helps extract higher-order features.
  • Average pooling and max pooling are two strategies for combining the information of neighboring neurons. Average pooling retains the information of all neurons and reflects the average response of the corresponding region; max pooling preserves details by keeping the maximum response of the region, preventing strong responses from being diluted by surrounding neurons.
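
To make the two strategies concrete, a small sketch (not from the original post) applies 2×2 average pooling and max pooling to one toy 4×4 single-channel feature map:

import numpy as np
from tensorflow.keras import layers

# Hedged sketch: compare 2x2 average pooling and max pooling on a toy feature map
x = np.arange(16, dtype="float32").reshape(1, 4, 4, 1)  # (batch, height, width, channels)
avg = layers.AveragePooling2D(pool_size=(2, 2))(x)
mx  = layers.MaxPooling2D(pool_size=(2, 2))(x)
print(avg.numpy().reshape(2, 2))  # mean of each 2x2 window: [[ 2.5  4.5] [10.5 12.5]]
print(mx.numpy().reshape(2, 2))   # max of each 2x2 window:  [[ 5.  7.] [13. 15.]]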

The average pooling layer is used here (officially: tf.keras.layers.AveragePooling2D ), and its template is:

tf.keras.layers.AveragePooling2D(
    pool_size=(2, 2),
    strides=None,
    padding='valid',
    data_format=None,
    **kwargs
)

With the default padding='valid', no padding is added, and the resulting output shape is $\left\lfloor \dfrac{input\_shape - pool\_size}{strides} \right\rfloor + 1$ (assuming $input\_shape \geq pool\_size$).

With padding='same', padding is added, and the resulting output shape is $\left\lfloor \dfrac{input\_shape - 1}{strides} \right\rfloor + 1$ (in particular, if the stride is 1 the output shape equals the input shape).
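
The 'valid' formula can be checked with a quick sketch (assumption: the default stride equals the pool size):

import numpy as np
from tensorflow.keras import layers

# Hedged sketch: floor((180 - 2) / 2) + 1 = 90, so a 180x180 map pooled with 2x2 becomes 90x90
x = np.zeros((1, 180, 180, 1), dtype="float32")
print(layers.AveragePooling2D(pool_size=(2, 2), padding='valid')(x).shape)  # (1, 90, 90, 1)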

2.2. Convolution layer

Input image matrix $I$ size: $w \times w$
Convolution kernel $K$ size: $k \times k$
Stride $S$ size: $s$
Padding $P$ size: $p$

The output size of the convolution is given by: $o = \dfrac{w - k + 2p}{s} + 1$
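
As a worked example (not in the original code): the first convolution in the model below has w = 180, k = 3, p = 0 (padding='valid') and s = 1, giving o = (180 − 3 + 0)/1 + 1 = 178, which matches the (None, 178, 178, 16) row of the model summary. A tiny helper makes the check explicit:

def conv_output_size(w, k, p=0, s=1):
    """Hedged sketch of the formula o = (w - k + 2p) / s + 1."""
    return (w - k + 2 * p) // s + 1

print(conv_output_size(180, 3))  # 178
print(conv_output_size(89, 3))   # 87, matches the second Conv2D layer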

You can use tf.keras.layers.Conv2D() (official docs: tf.keras.layers.Conv2D) to construct a convolutional layer; its signature is:

tf.keras.layers.Conv2D(
    filters,
    kernel_size,
    strides=(1, 1),
    padding='valid',
    data_format=None,
    dilation_rate=(1, 1),
    groups=1,
    activation=None,
    use_bias=True,
    kernel_initializer='glorot_uniform',
    bias_initializer='zeros',
    kernel_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    bias_constraint=None,
    **kwargs
)

Here the network uses three convolutional layers with 3×3 kernels and the relu activation function.

num_classes = 4

model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
    
    layers.Conv2D(16, (3, 3), activation='relu', input_shape=(img_height, img_width, 3)), # Convolution layer 1, 3*3 kernel
    layers.AveragePooling2D((2, 2)),               # Pooling layer 1, 2*2 downsampling
    layers.Conv2D(32, (3, 3), activation='relu'),  # Convolution layer 2, 3*3 kernel
    layers.AveragePooling2D((2, 2)),               # Pooling layer 2, 2*2 downsampling
    layers.Conv2D(64, (3, 3), activation='relu'),  # Convolution layer 3, 3*3 kernel
    layers.Dropout(0.3),                    # Dropout to reduce overfitting and improve generalization
    
    layers.Flatten(),                       # Flatten layer, connects the convolutional layers to the fully connected layers
    layers.Dense(128, activation='relu'),   # Fully connected layer, further feature extraction
    layers.Dense(num_classes)               # Output layer, produces the predictions
])

model.summary()  # Print the network structure

The output is as follows:

Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 rescaling (Rescaling)       (None, 180, 180, 3)       0         
                                                                 
 conv2d (Conv2D)             (None, 178, 178, 16)      448       
                                                                 
 average_pooling2d (AverageP  (None, 89, 89, 16)       0         
 ooling2D)                                                       
                                                                 
 conv2d_1 (Conv2D)           (None, 87, 87, 32)        4640      
                                                                 
 average_pooling2d_1 (Averag  (None, 43, 43, 32)       0         
 ePooling2D)                                                     
                                                                 
 conv2d_2 (Conv2D)           (None, 41, 41, 64)        18496     
                                                                 
 dropout (Dropout)           (None, 41, 41, 64)        0         
                                                                 
 flatten (Flatten)           (None, 107584)            0         
                                                                 
 dense (Dense)               (None, 128)               13770880  
                                                                 
 dense_1 (Dense)             (None, 4)                 516       
                                                                 
=================================================================
Total params: 13,794,980
Trainable params: 13,794,980
Non-trainable params: 0
_________________________________________________________________
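
The parameter counts in the summary can be reproduced by hand, e.g. conv2d has (3·3·3 + 1)·16 = 448 parameters. A short sketch (assuming the standard Conv2D/Dense parameter formulas) confirms the table:

def conv2d_params(k, in_channels, filters):
    # k*k*in_channels weights per filter plus one bias per filter
    return (k * k * in_channels + 1) * filters

def dense_params(in_units, out_units):
    # weight matrix plus one bias per output unit
    return (in_units + 1) * out_units

print(conv2d_params(3, 3, 16))          # 448
print(conv2d_params(3, 16, 32))         # 4640
print(conv2d_params(3, 32, 64))         # 18496
print(dense_params(41 * 41 * 64, 128))  # 13770880
print(dense_params(128, 4))             # 516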

2.3. Compile Settings

  • Loss function (loss): measures the model's accuracy during training. sparse_categorical_crossentropy is used here; it works the same way as categorical_crossentropy (multi-class cross-entropy loss), except that the ground-truth labels are integer-encoded (for example, class 0 is represented by the number 0 and class 3 by the number 3). See the official docs: tf.keras.losses.SparseCategoricalCrossentropy
  • Optimizer (optimizer): decides how the model is updated based on the data it sees and its loss function; Adam is used here (official docs: tf.keras.optimizers.Adam)
  • Evaluation function (metrics): monitors the training and testing steps; accuracy, the fraction of correctly classified images, is used here (official docs: tf.keras.metrics.Accuracy)

# Set the optimizer
opt = tf.keras.optimizers.Adam(learning_rate=0.001)

model.compile(optimizer=opt,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
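
Because the last Dense layer outputs raw logits (there is no softmax), from_logits=True is required. As an illustration (a minimal sketch, not from the original post), SparseCategoricalCrossentropy consumes integer-encoded labels directly:

import tensorflow as tf

# Hedged sketch: integer labels with raw logits; the values are hypothetical
y_true = [1, 3]                      # e.g. 'rain' and 'sunrise'
y_pred = [[2.0, 5.0, 0.5, 0.1],      # logits for two samples
          [0.3, 0.2, 0.1, 4.0]]
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
print(float(loss_fn(y_true, y_pred)))  # average cross-entropy over the two samples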

2.4. Model training

epochs = 30

history = model.fit(
  train_ds,
  validation_data=val_ds,
  epochs=epochs
)

The number of steps (iterations) per epoch is: $step = \left\lceil \dfrac{exampleNums}{batch\_size} \right\rceil = \left\lceil \dfrac{1125 - 225}{64} \right\rceil = \lceil 14.0625 \rceil = 15$

output:

Epoch 1/30
15/15 [==============================] - 12s 293ms/step - loss: 1.3749 - accuracy: 0.5144 - val_loss: 0.6937 - val_accuracy: 0.6533
Epoch 2/30
15/15 [==============================] - 2s 144ms/step - loss: 0.6076 - accuracy: 0.7800 - val_loss: 0.4826 - val_accuracy: 0.7956
Epoch 3/30
15/15 [==============================] - 2s 144ms/step - loss: 0.3909 - accuracy: 0.8589 - val_loss: 0.4557 - val_accuracy: 0.7911
Epoch 4/30
15/15 [==============================] - 2s 145ms/step - loss: 0.2943 - accuracy: 0.8878 - val_loss: 0.4268 - val_accuracy: 0.8356
Epoch 5/30
15/15 [==============================] - 2s 144ms/step - loss: 0.2307 - accuracy: 0.9056 - val_loss: 0.4260 - val_accuracy: 0.8400
Epoch 6/30
15/15 [==============================] - 2s 144ms/step - loss: 0.2000 - accuracy: 0.9267 - val_loss: 0.3143 - val_accuracy: 0.8711
Epoch 7/30
15/15 [==============================] - 2s 144ms/step - loss: 0.1496 - accuracy: 0.9367 - val_loss: 0.3277 - val_accuracy: 0.8844
Epoch 8/30
15/15 [==============================] - 2s 144ms/step - loss: 0.0894 - accuracy: 0.9678 - val_loss: 0.2851 - val_accuracy: 0.9200
Epoch 9/30
15/15 [==============================] - 2s 144ms/step - loss: 0.0638 - accuracy: 0.9800 - val_loss: 0.4995 - val_accuracy: 0.8578
Epoch 10/30
15/15 [==============================] - 2s 145ms/step - loss: 0.1132 - accuracy: 0.9622 - val_loss: 0.4961 - val_accuracy: 0.8356
Epoch 11/30
15/15 [==============================] - 2s 144ms/step - loss: 0.0576 - accuracy: 0.9800 - val_loss: 0.3318 - val_accuracy: 0.8844
Epoch 12/30
15/15 [==============================] - 2s 145ms/step - loss: 0.0417 - accuracy: 0.9878 - val_loss: 0.5433 - val_accuracy: 0.8756
Epoch 13/30
15/15 [==============================] - 2s 145ms/step - loss: 0.0301 - accuracy: 0.9944 - val_loss: 0.3797 - val_accuracy: 0.8978
Epoch 14/30
15/15 [==============================] - 2s 144ms/step - loss: 0.0401 - accuracy: 0.9833 - val_loss: 0.3982 - val_accuracy: 0.8489
Epoch 15/30
15/15 [==============================] - 2s 144ms/step - loss: 0.0260 - accuracy: 0.9922 - val_loss: 0.4777 - val_accuracy: 0.8844
Epoch 16/30
15/15 [==============================] - 2s 144ms/step - loss: 0.0139 - accuracy: 0.9978 - val_loss: 0.3858 - val_accuracy: 0.8978
Epoch 17/30
15/15 [==============================] - 2s 144ms/step - loss: 0.0067 - accuracy: 0.9989 - val_loss: 0.3942 - val_accuracy: 0.9156
Epoch 18/30
15/15 [==============================] - 2s 145ms/step - loss: 0.0059 - accuracy: 0.9989 - val_loss: 0.4101 - val_accuracy: 0.8844
Epoch 19/30
15/15 [==============================] - 2s 145ms/step - loss: 0.0039 - accuracy: 1.0000 - val_loss: 0.5176 - val_accuracy: 0.8889
Epoch 20/30
15/15 [==============================] - 2s 145ms/step - loss: 0.0068 - accuracy: 0.9989 - val_loss: 0.3836 - val_accuracy: 0.9156
Epoch 21/30
15/15 [==============================] - 2s 145ms/step - loss: 0.0116 - accuracy: 0.9989 - val_loss: 0.4635 - val_accuracy: 0.8800
Epoch 22/30
15/15 [==============================] - 2s 145ms/step - loss: 0.0062 - accuracy: 0.9989 - val_loss: 0.4315 - val_accuracy: 0.9022
Epoch 23/30
15/15 [==============================] - 2s 145ms/step - loss: 0.0047 - accuracy: 1.0000 - val_loss: 0.5728 - val_accuracy: 0.8933
Epoch 24/30
15/15 [==============================] - 2s 145ms/step - loss: 0.0090 - accuracy: 0.9989 - val_loss: 0.5049 - val_accuracy: 0.8889
Epoch 25/30
15/15 [==============================] - 2s 145ms/step - loss: 0.0210 - accuracy: 0.9911 - val_loss: 0.5712 - val_accuracy: 0.8756
Epoch 26/30
15/15 [==============================] - 2s 145ms/step - loss: 0.0354 - accuracy: 0.9878 - val_loss: 0.6332 - val_accuracy: 0.8889
Epoch 27/30
15/15 [==============================] - 2s 146ms/step - loss: 0.0781 - accuracy: 0.9789 - val_loss: 0.5726 - val_accuracy: 0.8578
Epoch 28/30
15/15 [==============================] - 2s 144ms/step - loss: 0.0713 - accuracy: 0.9767 - val_loss: 0.5084 - val_accuracy: 0.8889
Epoch 29/30
15/15 [==============================] - 2s 145ms/step - loss: 0.0439 - accuracy: 0.9822 - val_loss: 0.5302 - val_accuracy: 0.8711
Epoch 30/30
15/15 [==============================] - 2s 145ms/step - loss: 0.0343 - accuracy: 0.9878 - val_loss: 0.4488 - val_accuracy: 0.8933

Save the model:

model.save('model')

Output log:

WARNING:absl:Found untraced functions such as _jit_compiled_convolution_op, _jit_compiled_convolution_op, _jit_compiled_convolution_op while saving (showing 3 of 3). These functions will not be directly callable after loading.
INFO:tensorflow:Assets written to: model\assets
INFO:tensorflow:Assets written to: model\assets

3. Model evaluation (acc: 92.00%)

model = tf.keras.models.load_model('model')
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(12, 5))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend()
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend()
plt.title('Training and Validation Loss')
plt.savefig('pic2.jpg', dpi=600)  # Save the figure at the specified resolution
plt.show()

output:
(image output: training/validation accuracy and loss curves)
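
As a final check (a hedged sketch, not from the original post), the reloaded model can classify a single photo; the file name below is hypothetical and any image from the dataset will do. It assumes the data_dir, img_height, img_width, and class_names defined above:

import numpy as np

# Hedged sketch: predict the class of one image with the reloaded model
img = tf.keras.preprocessing.image.load_img(
    str(data_dir / "sunrise" / "sunrise1.jpg"),          # hypothetical file name
    target_size=(img_height, img_width))
x = tf.keras.preprocessing.image.img_to_array(img)[np.newaxis, ...]  # shape (1, 180, 180, 3)
probs = tf.nn.softmax(model.predict(x), axis=-1)         # logits -> probabilities
print(class_names[int(np.argmax(probs))])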

4. Model optimization (acc: 93.78%)

Optimization ideas:

The training loss keeps decreasing while the validation loss stays roughly flat, which indicates that the network is overfitting. We try to make the network structure more complex and the learning rate smaller.

4.1. Optimizing the network

num_classes = 4


model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
    
    layers.Conv2D(16, (3, 3), activation='relu', input_shape=(img_height, img_width, 3)), # Convolution layer 1, 3*3 kernel
    layers.AveragePooling2D((2, 2)),               # Pooling layer 1, 2*2 downsampling
    layers.Conv2D(32, (3, 3), activation='relu'),  # Convolution layer 2, 3*3 kernel
    layers.AveragePooling2D((2, 2)),               # Pooling layer 2, 2*2 downsampling
    layers.Conv2D(64, (3, 3), activation='relu'),  # Convolution layer 3, 3*3 kernel
    layers.AveragePooling2D((2, 2)),               # Pooling layer 3, 2*2 downsampling
    layers.Conv2D(128, (3, 3), activation='relu'), # Convolution layer 4, 3*3 kernel
    layers.Dropout(0.3),                    # Dropout to reduce overfitting and improve generalization
    
    layers.Flatten(),                       # Flatten layer, connects the convolutional layers to the fully connected layers
    layers.Dense(256, activation='relu'),   # Fully connected layer, further feature extraction
    layers.Dense(128, activation='relu'),   # Fully connected layer, further feature extraction
    layers.Dense(num_classes)               # Output layer, produces the predictions
])

model.summary()  # Print the network structure

output:

Model: "sequential_7"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 rescaling_7 (Rescaling)     (None, 180, 180, 3)       0         
                                                                 
 conv2d_25 (Conv2D)          (None, 178, 178, 16)      448       
                                                                 
 average_pooling2d_18 (Avera  (None, 89, 89, 16)       0         
 gePooling2D)                                                    
                                                                 
 conv2d_26 (Conv2D)          (None, 87, 87, 32)        4640      
                                                                 
 average_pooling2d_19 (Avera  (None, 43, 43, 32)       0         
 gePooling2D)                                                    
                                                                 
 conv2d_27 (Conv2D)          (None, 41, 41, 64)        18496     
                                                                 
 average_pooling2d_20 (Avera  (None, 20, 20, 64)       0         
 gePooling2D)                                                    
                                                                 
 conv2d_28 (Conv2D)          (None, 18, 18, 128)       73856     
                                                                 
 dropout_8 (Dropout)         (None, 18, 18, 128)       0         
                                                                 
 flatten_7 (Flatten)         (None, 41472)             0         
                                                                 
 dense_16 (Dense)            (None, 256)               10617088  
                                                                 
 dense_17 (Dense)            (None, 128)               32896     
                                                                 
 dense_18 (Dense)            (None, 4)                 516       
                                                                 
=================================================================
Total params: 10,747,940
Trainable params: 10,747,940
Non-trainable params: 0
_________________________________________________________________

4.2. Optimizing the learning rate

# Set the optimizer
opt = tf.keras.optimizers.Adam(learning_rate=0.0008)

model.compile(optimizer=opt,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

Train the model:

epochs = 50

history = model.fit(
  train_ds,
  validation_data=val_ds,
  epochs=epochs
)

output:

Epoch 1/50
15/15 [==============================] - 2s 131ms/step - loss: 1.0564 - accuracy: 0.4989 - val_loss: 0.7628 - val_accuracy: 0.5689
Epoch 2/50
15/15 [==============================] - 2s 123ms/step - loss: 0.7389 - accuracy: 0.6589 - val_loss: 0.8257 - val_accuracy: 0.6711
Epoch 3/50
15/15 [==============================] - 2s 126ms/step - loss: 0.5849 - accuracy: 0.7667 - val_loss: 0.5710 - val_accuracy: 0.7422
Epoch 4/50
15/15 [==============================] - 2s 125ms/step - loss: 0.4292 - accuracy: 0.8389 - val_loss: 0.6052 - val_accuracy: 0.7689
Epoch 5/50
15/15 [==============================] - 2s 140ms/step - loss: 0.3802 - accuracy: 0.8511 - val_loss: 0.7504 - val_accuracy: 0.7556
Epoch 6/50
15/15 [==============================] - 2s 123ms/step - loss: 0.3367 - accuracy: 0.8667 - val_loss: 0.4836 - val_accuracy: 0.8311
Epoch 7/50
15/15 [==============================] - 2s 124ms/step - loss: 0.2773 - accuracy: 0.8889 - val_loss: 0.3823 - val_accuracy: 0.8622
Epoch 8/50
15/15 [==============================] - 2s 125ms/step - loss: 0.2457 - accuracy: 0.9067 - val_loss: 0.3668 - val_accuracy: 0.8622
Epoch 9/50
15/15 [==============================] - 2s 126ms/step - loss: 0.2333 - accuracy: 0.9144 - val_loss: 0.4030 - val_accuracy: 0.8489
Epoch 10/50
15/15 [==============================] - 2s 124ms/step - loss: 0.2526 - accuracy: 0.9089 - val_loss: 0.6440 - val_accuracy: 0.8044
Epoch 11/50
15/15 [==============================] - 2s 123ms/step - loss: 0.2331 - accuracy: 0.9122 - val_loss: 0.4930 - val_accuracy: 0.8444
Epoch 12/50
15/15 [==============================] - 2s 126ms/step - loss: 0.1934 - accuracy: 0.9311 - val_loss: 0.3481 - val_accuracy: 0.8844
Epoch 13/50
15/15 [==============================] - 2s 124ms/step - loss: 0.1471 - accuracy: 0.9367 - val_loss: 0.3174 - val_accuracy: 0.9022
Epoch 14/50
15/15 [==============================] - 2s 122ms/step - loss: 0.1141 - accuracy: 0.9656 - val_loss: 0.4393 - val_accuracy: 0.8578
Epoch 15/50
15/15 [==============================] - 2s 125ms/step - loss: 0.0878 - accuracy: 0.9700 - val_loss: 0.4360 - val_accuracy: 0.8978
Epoch 16/50
15/15 [==============================] - 2s 123ms/step - loss: 0.0718 - accuracy: 0.9744 - val_loss: 0.3478 - val_accuracy: 0.8978
Epoch 17/50
15/15 [==============================] - 2s 123ms/step - loss: 0.0561 - accuracy: 0.9844 - val_loss: 0.3561 - val_accuracy: 0.9200
Epoch 18/50
15/15 [==============================] - 2s 124ms/step - loss: 0.1152 - accuracy: 0.9544 - val_loss: 0.3755 - val_accuracy: 0.9111
Epoch 19/50
15/15 [==============================] - 2s 123ms/step - loss: 0.0642 - accuracy: 0.9778 - val_loss: 0.3634 - val_accuracy: 0.8978
Epoch 20/50
15/15 [==============================] - 2s 124ms/step - loss: 0.0347 - accuracy: 0.9922 - val_loss: 0.3544 - val_accuracy: 0.8978
Epoch 21/50
15/15 [==============================] - 2s 132ms/step - loss: 0.0432 - accuracy: 0.9844 - val_loss: 0.7549 - val_accuracy: 0.8222
Epoch 22/50
15/15 [==============================] - 2s 124ms/step - loss: 0.0641 - accuracy: 0.9811 - val_loss: 0.4202 - val_accuracy: 0.8933
Epoch 23/50
15/15 [==============================] - 2s 123ms/step - loss: 0.0295 - accuracy: 0.9900 - val_loss: 0.4618 - val_accuracy: 0.9200
Epoch 24/50
15/15 [==============================] - 2s 124ms/step - loss: 0.0131 - accuracy: 0.9978 - val_loss: 0.4210 - val_accuracy: 0.9067
Epoch 25/50
15/15 [==============================] - 2s 123ms/step - loss: 0.0172 - accuracy: 0.9944 - val_loss: 0.4878 - val_accuracy: 0.8978
Epoch 26/50
15/15 [==============================] - 2s 125ms/step - loss: 0.0086 - accuracy: 0.9978 - val_loss: 0.4908 - val_accuracy: 0.9111
Epoch 27/50
15/15 [==============================] - 2s 123ms/step - loss: 0.0096 - accuracy: 0.9967 - val_loss: 0.5744 - val_accuracy: 0.8978
Epoch 28/50
15/15 [==============================] - 2s 122ms/step - loss: 0.0058 - accuracy: 0.9989 - val_loss: 0.5868 - val_accuracy: 0.8889
Epoch 29/50
15/15 [==============================] - 2s 126ms/step - loss: 0.0196 - accuracy: 0.9911 - val_loss: 0.5722 - val_accuracy: 0.8578
Epoch 30/50
15/15 [==============================] - 2s 126ms/step - loss: 0.0108 - accuracy: 0.9967 - val_loss: 0.5498 - val_accuracy: 0.9022
Epoch 31/50
15/15 [==============================] - 2s 125ms/step - loss: 0.0041 - accuracy: 1.0000 - val_loss: 0.5006 - val_accuracy: 0.9111
Epoch 32/50
15/15 [==============================] - 2s 124ms/step - loss: 0.0020 - accuracy: 1.0000 - val_loss: 0.5010 - val_accuracy: 0.9067
Epoch 33/50
15/15 [==============================] - 2s 130ms/step - loss: 0.0067 - accuracy: 0.9967 - val_loss: 0.5418 - val_accuracy: 0.9156
Epoch 34/50
15/15 [==============================] - 2s 123ms/step - loss: 0.0092 - accuracy: 0.9967 - val_loss: 0.6289 - val_accuracy: 0.9067
Epoch 35/50
15/15 [==============================] - 2s 122ms/step - loss: 0.0152 - accuracy: 0.9933 - val_loss: 0.5908 - val_accuracy: 0.8978
Epoch 36/50
15/15 [==============================] - 2s 123ms/step - loss: 0.0359 - accuracy: 0.9911 - val_loss: 0.5404 - val_accuracy: 0.8756
Epoch 37/50
15/15 [==============================] - 2s 124ms/step - loss: 0.0492 - accuracy: 0.9811 - val_loss: 0.5009 - val_accuracy: 0.9022
Epoch 38/50
15/15 [==============================] - 2s 125ms/step - loss: 0.0395 - accuracy: 0.9844 - val_loss: 0.4718 - val_accuracy: 0.9244
Epoch 39/50
15/15 [==============================] - 2s 125ms/step - loss: 0.0357 - accuracy: 0.9956 - val_loss: 0.6038 - val_accuracy: 0.8844
Epoch 40/50
15/15 [==============================] - 2s 124ms/step - loss: 0.0309 - accuracy: 0.9889 - val_loss: 0.4189 - val_accuracy: 0.9200
Epoch 41/50
15/15 [==============================] - 2s 126ms/step - loss: 0.0093 - accuracy: 0.9989 - val_loss: 0.5180 - val_accuracy: 0.9067
Epoch 42/50
15/15 [==============================] - 2s 125ms/step - loss: 0.0033 - accuracy: 1.0000 - val_loss: 0.4415 - val_accuracy: 0.9244
Epoch 43/50
15/15 [==============================] - 2s 124ms/step - loss: 0.0016 - accuracy: 1.0000 - val_loss: 0.4622 - val_accuracy: 0.9378
Epoch 44/50
15/15 [==============================] - 2s 125ms/step - loss: 5.7711e-04 - accuracy: 1.0000 - val_loss: 0.4805 - val_accuracy: 0.9333
Epoch 45/50
15/15 [==============================] - 2s 125ms/step - loss: 4.1283e-04 - accuracy: 1.0000 - val_loss: 0.4820 - val_accuracy: 0.9244
Epoch 46/50
15/15 [==============================] - 2s 126ms/step - loss: 3.2792e-04 - accuracy: 1.0000 - val_loss: 0.4859 - val_accuracy: 0.9333
Epoch 47/50
15/15 [==============================] - 2s 127ms/step - loss: 2.7573e-04 - accuracy: 1.0000 - val_loss: 0.4932 - val_accuracy: 0.9289
Epoch 48/50
15/15 [==============================] - 2s 123ms/step - loss: 2.7769e-04 - accuracy: 1.0000 - val_loss: 0.4877 - val_accuracy: 0.9333
Epoch 49/50
15/15 [==============================] - 2s 124ms/step - loss: 2.6387e-04 - accuracy: 1.0000 - val_loss: 0.5107 - val_accuracy: 0.9289
Epoch 50/50
15/15 [==============================] - 2s 122ms/step - loss: 2.1140e-04 - accuracy: 1.0000 - val_loss: 0.4979 - val_accuracy: 0.9378

The maximum val_accuracy, 93.78%, is reached at epoch 50 of training.
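
Since the best validation accuracy is not guaranteed to land in the final epoch, one possible refinement (not used in the original code) is a ModelCheckpoint callback that keeps only the weights with the highest val_accuracy:

# Hedged sketch: save the best model (by val_accuracy) seen during training
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    'best_model',                  # hypothetical output path
    monitor='val_accuracy',
    save_best_only=True)

history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs,
    callbacks=[checkpoint])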

Visualization of the training process:
(image output: training/validation accuracy and loss curves for the optimized model)

Original post: blog.csdn.net/qq_45550375/article/details/126325300