Use MATLAB to build a 1DCNN model for time series classification

The MATLAB version used is R2022b.

Data

To make it easy to verify the feasibility of the experiment, the data required for it are provided here.
The data are accelerations recorded during human activities: the X-axis, Y-axis and Z-axis readings of a three-axis accelerometer, plus the resultant acceleration.

Baidu network disk link: https://pan.baidu.com/s/1xHufym00DH6fC6ZfylVUVA
Extraction code: 2023

This dataset is provided only for verification experiments; using it in dissertations, journal papers, etc. is prohibited.

By default, the training script and the dataset folder are assumed to be in the same directory. If they are not, modify the file paths in the source code.

The dataset folder contains two subfolders, train and test; each contains five files: A.xlsx, X.xlsx, Y.xlsx, Z.xlsx and label.xlsx.
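Before training, it can help to do a quick sanity check that the files are where the script expects them and that each data file has one row per sample. A minimal sketch, assuming the default directory layout described above (each row of A.xlsx is one recording window and label.xlsx holds one label per row):

% Quick sanity check on the downloaded data (default directory layout assumed)
a = xlsread('dataset\train\A.xlsx');
lab = xlsread('dataset\train\label.xlsx');
fprintf('Training samples: %d, window length: %d, labels: %d\n', size(a,1), size(a,2), numel(lab));
assert(size(a,1) == numel(lab), 'Each data row needs a matching label.');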

Model building

A deep learning model can be built in MATLAB either directly in code or with a toolbox. The following steps show how to build the 1DCNN model using the Deep Network Designer toolbox.

1. Open the "Deep Network Designer" app from the APPS tab.
2. Create a new blank network.
3. Build the model.
The network can be assembled by dragging layers from the layer library on the left onto the canvas.

4. Parameter setting

The parameters of each layer, such as the number and size of the convolution kernels in a convolutional layer, can be set under "Properties" on the right after clicking the corresponding layer.

5. After the model is built, training parameters such as the optimizer need to be set.

Click "Training".

Then click "Training Options".

Here you can set the optimizer, learning rate, batch size and other parameters, and adjust them to your own needs.

6. Export the network as code (a 2DCNN can be trained directly in the toolbox; I have not yet managed to train a 1DCNN that way), or export it to the workspace, depending on your needs. After the code is generated, you can modify the parameters directly in the .m file, which is more convenient.

7. Review the exported code.


8. Copy the contents of the generated layer group into your own .m file and run it to train the model.

Note that the generated code does not include settings such as the optimizer and batch size.

The code for the model-training part is given below.

clear all
close all

%% Data loading and generation
% Data description: human-activity acceleration data, containing the X, Y and Z axes of a three-axis accelerometer and the resultant acceleration
train_a = xlsread('dataset\train\A.xlsx');
train_x = xlsread('dataset\train\X.xlsx');
train_y = xlsread('dataset\train\Y.xlsx');
train_z = xlsread('dataset\train\Z.xlsx');
train_label = xlsread('dataset\train\label.xlsx');

test_a = xlsread('dataset\test\A.xlsx');
test_x = xlsread('dataset\test\X.xlsx');
test_y = xlsread('dataset\test\Y.xlsx');
test_z = xlsread('dataset\test\Z.xlsx');
test_label = xlsread('dataset\test\label.xlsx');

% Get array dimensions
[trainRow, trainCol] = size(train_a);
[testRow, testCol] = size(test_a);
% Create cell arrays to hold the acceleration data
train = cell(trainRow,1);
test = cell(testRow, 1);
% Assemble the dataset
for i = 1:trainRow
    train{i, 1} = [train_a(i,:); train_x(i,:); train_y(i,:); train_z(i,:)];
end

for i = 1:testRow
    test{i, 1} = [test_a(i,:); test_x(i,:); test_y(i,:); test_z(i,:)];
end

%% Training data processing
% Convert labels to categorical
train_label = string(train_label);
train_label = categorical(train_label);
test_label = string(test_label);
test_label = categorical(test_label);
% Train/validation split: 80% of the training set for training, 20% for validation
row_train = round(length(train)*0.8);
XTrain = train(1:row_train, 1);
TTrain = train_label(1:row_train, 1);
XValidation = train(row_train+1:end, 1);
TValidation = train_label(row_train+1:end, 1);
% Number of classes
numClasses = numel(categories(test_label));

%% Network design
kernelSize = 9;
layers = [
    sequenceInputLayer(4,"Name","sequence","MinLength",128)
    convolution1dLayer(kernelSize,32,"Name","conv1d","Padding","same")
    batchNormalizationLayer("Name","batchnorm")
    reluLayer("Name","relu")
    maxPooling1dLayer(2,"Name","maxpool1d","Padding","same","Stride",2)

    convolution1dLayer(kernelSize,64,"Name","conv1d_1","Padding","same")
    batchNormalizationLayer("Name","batchnorm_1")
    reluLayer("Name","relu_1")
    maxPooling1dLayer(2,"Name","maxpool1d_1","Padding","same","Stride",2)
    
    convolution1dLayer(kernelSize,128,"Name","conv1d_2","Padding","same")
    batchNormalizationLayer("Name","batchnorm_2")
    reluLayer("Name","relu_2")
    maxPooling1dLayer(2,"Name","maxpool1d_2","Padding","same","Stride",2)

    convolution1dLayer(kernelSize,256,"Name","conv1d_3","Padding","same")
    batchNormalizationLayer("Name","batchnorm_3")
    reluLayer("Name","relu_3")
    maxPooling1dLayer(2,"Name","maxpool1d_3","Padding","same","Stride",2)

    convolution1dLayer(kernelSize,512,"Name","conv1d_4","Padding","same")
    batchNormalizationLayer("Name","batchnorm_4")
    reluLayer("Name","relu_4")
    maxPooling1dLayer(2,"Name","maxpool1d_4","Padding","same","Stride",2)

    globalAveragePooling1dLayer("Name","gapool1d")

    fullyConnectedLayer(128,"Name","fc")
    reluLayer("Name","relu_5")
    fullyConnectedLayer(64,"Name","fc_1")
    reluLayer("Name","relu_6")
    fullyConnectedLayer(numClasses,"Name","fc_2")
    softmaxLayer("Name","softmax")
    classificationLayer("Name","classoutput")];

%% Training options
% Mini-batch size
miniBatchSize = 128;
% Optimizer settings
options = trainingOptions("adam", ...
    MiniBatchSize=miniBatchSize, ...
    MaxEpochs=10, ...
    ValidationData={XValidation,TValidation}, ...
    Plots="training-progress", ...
    Verbose=0, ...
    LearnRateSchedule="piecewise",...
    LearnRateDropPeriod=10);
% Train the model
[net, info] = trainNetwork(XTrain,TTrain,layers,options);

Model training

Running the code above opens a training-progress window that plots accuracy and loss as training proceeds (enabled by Plots="training-progress" in the options).


Model saving

%% Save the model
% "net" is the variable name of the trained network above
% "model.mat" is the file name (and path) of the saved model
save('model.mat', "net")

Model loading

% Note: the ".net" at the end is required; it must match the variable name the network was saved under
net = load("model.mat").net;
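Once loaded, the network can be applied directly to a new recording. A minimal inference sketch, where newA, newX, newY and newZ are hypothetical 1-by-L row vectors from a new measurement (L must be at least 128, the MinLength of the input layer):

% Classify a single new sample (newA/newX/newY/newZ are example variables)
sample = {[newA; newX; newY; newZ]};   % 4 channels, same order as in training
pred = classify(net, sample);          % categorical prediction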

Model evaluation

The trained model should be evaluated with suitable metrics to determine whether its performance meets our requirements. The code below computes the precision, recall and F1 score of each class, as well as the macro-averaged precision, recall and F1 score.

%% Model evaluation
YPred = classify(net, test);
YPred = categorical(YPred);
acc = mean(YPred == test_label);
% Confusion matrix
m = confusionmat(test_label,YPred);
% Plot the confusion matrix with custom category names
confusionchart(m,["Class 1","Class 2","Class 3","Class 4","Class 5","Class 6","Class 7","Class 8"])

% Metrics for class 1 (precision, recall, F1)
c1_precise = m(1,1)/sum(m(:,1));
c1_recall = m(1,1)/sum(m(1,:));
c1_F1 = 2*c1_precise*c1_recall/(c1_precise+c1_recall);
% Metrics for class 2
c2_precise = m(2,2)/sum(m(:,2));
c2_recall = m(2,2)/sum(m(2,:));
c2_F1 = 2*c2_precise*c2_recall/(c2_precise+c2_recall);
% Metrics for class 3
c3_precise = m(3,3)/sum(m(:,3));
c3_recall = m(3,3)/sum(m(3,:));
c3_F1 = 2*c3_precise*c3_recall/(c3_precise+c3_recall);
% Metrics for class 4
c4_precise = m(4,4)/sum(m(:,4));
c4_recall = m(4,4)/sum(m(4,:));
c4_F1 = 2*c4_precise*c4_recall/(c4_precise+c4_recall);
% Metrics for class 5
c5_precise = m(5,5)/sum(m(:,5));
c5_recall = m(5,5)/sum(m(5,:));
c5_F1 = 2*c5_precise*c5_recall/(c5_precise+c5_recall);
% Metrics for class 6
c6_precise = m(6,6)/sum(m(:,6));
c6_recall = m(6,6)/sum(m(6,:));
c6_F1 = 2*c6_precise*c6_recall/(c6_precise+c6_recall);
% Metrics for class 7
c7_precise = m(7,7)/sum(m(:,7));
c7_recall = m(7,7)/sum(m(7,:));
c7_F1 = 2*c7_precise*c7_recall/(c7_precise+c7_recall);

macroPrecise = (c1_precise+c2_precise+c3_precise+c4_precise+c5_precise+c6_precise+c7_precise)/7;
macroRecall = (c1_recall+c2_recall+c3_recall+c4_recall+c5_recall+c6_recall+c7_recall)/7;
macroF1 = (c1_F1+c2_F1+c3_F1+c4_F1+c5_F1+c6_F1+c7_F1)/7;
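The per-class calculations above can also be written in vectorized form over the confusion matrix, which avoids the repetition and works for any number of classes. A compact sketch under the same assumptions (m is the square matrix returned by confusionmat, with true classes in rows and predicted classes in columns):

% Vectorized per-class precision, recall and F1 from the confusion matrix m
precise = diag(m) ./ sum(m, 1)';   % column sums = predicted counts per class
recall  = diag(m) ./ sum(m, 2);    % row sums = true counts per class
F1      = 2 .* precise .* recall ./ (precise + recall);
macroPrecise = mean(precise);
macroRecall  = mean(recall);
macroF1      = mean(F1);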

MATLAB's confusion chart uses the numeric class labels as category names by default, but you can change this yourself. The confusionmat and confusionchart calls in the code above are the key: passing a string array of names as the second argument of confusionchart replaces the default numeric categories with your own class names.
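For comparison, a chart with the default categories can also be produced directly from the label vectors, without building the matrix first:

% Default confusion chart: category names come from the categorical labels themselves
confusionchart(test_label, YPred)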

Full source code

clear all
close all

%% Data loading and generation
% Data description: human-activity acceleration data, containing the X, Y and Z axes of a three-axis accelerometer and the resultant acceleration
train_a = xlsread('dataset\train\A.xlsx');
train_x = xlsread('dataset\train\X.xlsx');
train_y = xlsread('dataset\train\Y.xlsx');
train_z = xlsread('dataset\train\Z.xlsx');
train_label = xlsread('dataset\train\label.xlsx');

test_a = xlsread('dataset\test\A.xlsx');
test_x = xlsread('dataset\test\X.xlsx');
test_y = xlsread('dataset\test\Y.xlsx');
test_z = xlsread('dataset\test\Z.xlsx');
test_label = xlsread('dataset\test\label.xlsx');

% Get array dimensions
[trainRow, trainCol] = size(train_a);
[testRow, testCol] = size(test_a);
% Create cell arrays to hold the acceleration data
train = cell(trainRow,1);
test = cell(testRow, 1);
% Assemble the dataset
for i = 1:trainRow
    train{i, 1} = [train_a(i,:); train_x(i,:); train_y(i,:); train_z(i,:)];
end

for i = 1:testRow
    test{i, 1} = [test_a(i,:); test_x(i,:); test_y(i,:); test_z(i,:)];
end

%% Training data processing
% Convert labels to categorical
train_label = string(train_label);
train_label = categorical(train_label);
test_label = string(test_label);
test_label = categorical(test_label);
% Train/validation split: 80% of the training set for training, 20% for validation
row_train = round(length(train)*0.8);
XTrain = train(1:row_train, 1);
TTrain = train_label(1:row_train, 1);
XValidation = train(row_train+1:end, 1);
TValidation = train_label(row_train+1:end, 1);
% Number of classes
numClasses = numel(categories(test_label));

%% Network design
kernelSize = 9;
layers = [
    sequenceInputLayer(4,"Name","sequence","MinLength",128)
    convolution1dLayer(kernelSize,32,"Name","conv1d","Padding","same")
    batchNormalizationLayer("Name","batchnorm")
    reluLayer("Name","relu")
    maxPooling1dLayer(2,"Name","maxpool1d","Padding","same","Stride",2)

    convolution1dLayer(kernelSize,64,"Name","conv1d_1","Padding","same")
    batchNormalizationLayer("Name","batchnorm_1")
    reluLayer("Name","relu_1")
    maxPooling1dLayer(2,"Name","maxpool1d_1","Padding","same","Stride",2)
    
    convolution1dLayer(kernelSize,128,"Name","conv1d_2","Padding","same")
    batchNormalizationLayer("Name","batchnorm_2")
    reluLayer("Name","relu_2")
    maxPooling1dLayer(2,"Name","maxpool1d_2","Padding","same","Stride",2)

    convolution1dLayer(kernelSize,256,"Name","conv1d_3","Padding","same")
    batchNormalizationLayer("Name","batchnorm_3")
    reluLayer("Name","relu_3")
    maxPooling1dLayer(2,"Name","maxpool1d_3","Padding","same","Stride",2)

    convolution1dLayer(kernelSize,512,"Name","conv1d_4","Padding","same")
    batchNormalizationLayer("Name","batchnorm_4")
    reluLayer("Name","relu_4")
    maxPooling1dLayer(2,"Name","maxpool1d_4","Padding","same","Stride",2)

    globalAveragePooling1dLayer("Name","gapool1d")

    fullyConnectedLayer(128,"Name","fc")
    reluLayer("Name","relu_5")
    fullyConnectedLayer(64,"Name","fc_1")
    reluLayer("Name","relu_6")
    fullyConnectedLayer(numClasses,"Name","fc_2")
    softmaxLayer("Name","softmax")
    classificationLayer("Name","classoutput")];

%% Training options
% Mini-batch size
miniBatchSize = 128;
% Optimizer settings
options = trainingOptions("adam", ...
    MiniBatchSize=miniBatchSize, ...
    MaxEpochs=10, ...
    ValidationData={XValidation,TValidation}, ...
    Plots="training-progress", ...
    Verbose=0, ...
    LearnRateSchedule="piecewise",...
    LearnRateDropPeriod=10);
% Train the model
[net, info] = trainNetwork(XTrain,TTrain,layers,options);

%% Save the model
save('model.mat', "net")

%% Load the model
net = load("model.mat").net;

%% Model evaluation
YPred = classify(net, test);
YPred = categorical(YPred);
acc = mean(YPred == test_label);
% Confusion matrix
m = confusionmat(test_label,YPred);
% Plot the confusion matrix with custom category names
confusionchart(m,["Class 1","Class 2","Class 3","Class 4","Class 5","Class 6","Class 7","Class 8"])

% Metrics for class 1 (precision, recall, F1)
c1_precise = m(1,1)/sum(m(:,1));
c1_recall = m(1,1)/sum(m(1,:));
c1_F1 = 2*c1_precise*c1_recall/(c1_precise+c1_recall);
% Metrics for class 2
c2_precise = m(2,2)/sum(m(:,2));
c2_recall = m(2,2)/sum(m(2,:));
c2_F1 = 2*c2_precise*c2_recall/(c2_precise+c2_recall);
% Metrics for class 3
c3_precise = m(3,3)/sum(m(:,3));
c3_recall = m(3,3)/sum(m(3,:));
c3_F1 = 2*c3_precise*c3_recall/(c3_precise+c3_recall);
% Metrics for class 4
c4_precise = m(4,4)/sum(m(:,4));
c4_recall = m(4,4)/sum(m(4,:));
c4_F1 = 2*c4_precise*c4_recall/(c4_precise+c4_recall);
% Metrics for class 5
c5_precise = m(5,5)/sum(m(:,5));
c5_recall = m(5,5)/sum(m(5,:));
c5_F1 = 2*c5_precise*c5_recall/(c5_precise+c5_recall);
% Metrics for class 6
c6_precise = m(6,6)/sum(m(:,6));
c6_recall = m(6,6)/sum(m(6,:));
c6_F1 = 2*c6_precise*c6_recall/(c6_precise+c6_recall);
% Metrics for class 7
c7_precise = m(7,7)/sum(m(:,7));
c7_recall = m(7,7)/sum(m(7,:));
c7_F1 = 2*c7_precise*c7_recall/(c7_precise+c7_recall);

macroPrecise = (c1_precise+c2_precise+c3_precise+c4_precise+c5_precise+c6_precise+c7_precise)/7;
macroRecall = (c1_recall+c2_recall+c3_recall+c4_recall+c5_recall+c6_recall+c7_recall)/7;
macroF1 = (c1_F1+c2_F1+c3_F1+c4_F1+c5_F1+c6_F1+c7_F1)/7;

Origin: blog.csdn.net/weixin_49216787/article/details/130110094