Deep learning models in Keras: the sequential model (Sequential) and the general model (Model)

Series Contents


Chapter 1: Deep learning models in Python Keras: the sequential model (Sequential) and the general model (Model)


Contents

Series Contents

I. Sequential model (Sequential)

1. Constructing from a list

2. Constructing with add()

II. General model (Model)

3. Example

Summary

I. Sequential model (Sequential)

The Sequential model is a linear stack of layers. You create one by passing a list of layer instances to the constructor; stacking many layers in this way builds a deep neural network.

There are two ways to create a Sequential model: constructing it from a list, or constructing it with add().

1. Constructing from a list

Construct the model by passing a list of layers to Sequential:

from keras.models import Sequential
from keras.layers import Dense, Activation

# Stack in order: 784-d input -> 32-unit ReLU layer -> 10-way softmax output
layers = [Dense(32, input_shape=(784,)),
          Activation('relu'),
          Dense(10),
          Activation('softmax')]

model = Sequential(layers)
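After the model is constructed, a quick sanity check (a small sketch using the standard Keras API, not part of the original code) is to print the layer stack:

# Prints each layer, its output shape and its number of trainable parameters
model.summary()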

2. Constructing with add()

Add layers to the Sequential model one at a time with add():

from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential()  # define the model
model.add(Dense(units=64, activation='relu', input_dim=100))  # first hidden layer: 64 ReLU units on a 100-d input
model.add(Dense(units=10, activation='softmax'))  # output layer: 10-way softmax
model.compile(loss='categorical_crossentropy',  # specify the loss function, optimizer and evaluation metric
              optimizer='sgd',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=32)  # train the model
loss_and_metrics = model.evaluate(x_test, y_test, batch_size=128)  # evaluate the model
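Note that categorical_crossentropy expects one-hot encoded labels. If y_train and y_test hold integer class indices, a small conversion step (a sketch using the standard keras.utils API; the class count of 10 matches the output layer above) would look like this:

from keras.utils import to_categorical

# Convert integer labels (0..9) into one-hot vectors so they match
# the 10-unit softmax output and the categorical_crossentropy loss
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)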

II. General model (Model)

The general model (Keras's functional API) builds a network by wiring layers together explicitly: starting from an Input tensor, each layer is called on the output of the previous layer, and the resulting graph is wrapped in a Model:

from keras.layers import Input, Dense
from keras.models import Model

# Input layer: fixes the input dimensionality
input = Input(shape=(784,))
# Two hidden layers with 64 ReLU units each; each layer is called on the previous layer's output
x = Dense(64, activation='relu')(input)
x = Dense(64, activation='relu')(x)
# Output layer
y = Dense(10, activation='softmax')(x)

model = Model(inputs=input, outputs=y)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(data, labels)  # data and labels are the training arrays prepared beforehand
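The benefit of Model over Sequential is that the layer graph does not have to be a single chain. As a minimal sketch (not from the original post; the input sizes are arbitrary), the functional API can express a two-input model whose branches are merged before classification:

from keras.layers import Input, Dense, concatenate
from keras.models import Model

# Two separate inputs, each with its own Dense branch
input_a = Input(shape=(784,))
input_b = Input(shape=(32,))
branch_a = Dense(64, activation='relu')(input_a)
branch_b = Dense(64, activation='relu')(input_b)

# Merge the branches and classify; this topology cannot be written as a single Sequential stack
merged = concatenate([branch_a, branch_b])
output = Dense(10, activation='softmax')(merged)

model = Model(inputs=[input_a, input_b], outputs=output)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])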

3. Example

As an example, here is the code for a CNN that classifies one-dimensional signals:

from keras.layers import Conv1D, Dense, BatchNormalization, MaxPooling1D, Activation, Flatten, Input
from keras.models import Model
from keras.callbacks import TensorBoard
from keras.regularizers import l2
import matplotlib.pyplot as plt
import numpy as np
import preprocess  # custom module that loads and preprocesses the raw 1-D signal data

batch_size = 128
epochs = 20
num_classes = 10
length = 2048           # length of each signal sample
BatchNorm = True        # whether to use batch normalization
number = 1000           # number of samples per class
normal = True           # whether to standardize the data
rate = [0.7, 0.2, 0.1]  # train / validation / test split ratio

# Path to the dataset
path = xxx
# Load and preprocess the data with the custom preprocess module
x_train, y_train, x_valid, y_valid, x_test, y_test = preprocess.prepro(d_path=path, length=length,
                                                                       number=number,
                                                                       normal=normal,
                                                                       rate=rate,
                                                                       enc=True, enc_step=28)

# Add a channel axis so each sample has shape (length, 1)
x_train, x_valid, x_test = x_train[:, :, np.newaxis], x_valid[:, :, np.newaxis], x_test[:, :, np.newaxis]

# Shape of a single input sample
input_shape = x_train.shape[1:]

input = Input(shape=input_shape)

# Convolutional block 1
x = Conv1D(filters=16, kernel_size=64, strides=16, padding='same', kernel_regularizer=l2(1e-4))(input)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling1D(pool_size=2)(x)

# Convolutional block 2
x = Conv1D(filters=32, kernel_size=3, strides=1, padding='same', kernel_regularizer=l2(1e-4))(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling1D(pool_size=2)(x)

# Flatten the feature maps before the fully connected layers
x = Flatten()(x)

# Fully connected layer
x = Dense(units=100, activation='relu', kernel_regularizer=l2(1e-4))(x)

# Output layer
output = Dense(units=num_classes, activation='softmax', kernel_regularizer=l2(1e-4))(x)
model = Model(inputs=input, outputs=output)
model.compile(optimizer='Adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# TensorBoard callback to monitor training
tb_cb = TensorBoard(log_dir='logs')


# Train the model, validating on the held-out validation set after each epoch
history = model.fit(x=x_train, y=y_train, batch_size=batch_size, epochs=epochs,
                    verbose=1, validation_data=(x_valid, y_valid), shuffle=True,
                    callbacks=[tb_cb])

# Evaluate the model on the test set
score = model.evaluate(x=x_test, y=y_test, verbose=0)
print("Test loss:", score[0])
print("Test accuracy:", score[1])
# plot_model(model=model, to_file='wdcnn.png', show_shapes=True)

#####################################################################
# Plot training & validation accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()

# Plot training & validation loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
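To reuse the trained network later without retraining it, the standard Keras save/load calls can be appended at the end (a small sketch, not part of the original script; the file name wdcnn.h5 is just a placeholder):

from keras.models import load_model

# Persist the architecture, weights and optimizer state to a single HDF5 file
model.save('wdcnn.h5')

# Later, restore the identical model and evaluate it again on the test set
restored = load_model('wdcnn.h5')
print(restored.evaluate(x=x_test, y=y_test, verbose=0))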

Summary

This post records what I have learned about using the sequential model (Sequential) and the general model (Model) in Keras.


Reposted from blog.csdn.net/weixin_44779323/article/details/126488830