snntorch_P3: Spiking Neural Networks vs. Classical Algorithms

Hello everyone. This blog hasn't been updated for nearly a year, and I'm now in my second year of research. This year I did some project work, studied algorithms, and learned Java. I'm also a little anxious about the future, since the job market is genuinely tough. All we can do is keep improving ourselves, so let's get down to business!

My research direction is morphing-aircraft control based on spiking neural networks. During high-altitude flight, changes in altitude, speed, angle of attack, and so on alter the aircraft's aerodynamic characteristics. What I want to do is select an appropriate morphing rate for a given flight state (the chosen morphing variable is the wing sweep angle). If we take altitude, speed, and angle of attack as inputs and the morphing rate as the output, we can feed the data into a neural network for training. After reading some papers and considering how hard the dataset would be to construct, I set the morphing rate to discrete values, so in the end this is a classification problem.
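To illustrate how a continuous morphing rate becomes a class label, here is a minimal sketch. The bin edges and the six-class equal-width split are hypothetical placeholders, not the scheme used in my actual dataset:

```python
import numpy as np

# Hypothetical continuous morphing rates (fraction of full sweep-angle change)
rates = np.array([0.03, 0.18, 0.35, 0.52, 0.71, 0.95])

# Six equal-width bins over [0, 1]; each sample gets a class label in 1..6
edges = np.linspace(0, 1, 7)                 # 7 edges -> 6 bins
labels = np.digitize(rates, edges[1:-1]) + 1 # interior edges only

print(labels)  # one class per sample, in 1..6
```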

If you clicked on this post, you are probably interested in spiking neural networks. One of their biggest advantages is the low energy consumption of spike-based computation. But lower compared with what, and by how much? My current plan is to implement the same task with several classical algorithms — BP (with initial weights optimized by a genetic algorithm), SVM, RBF, decision tree, and CNN — and compare their results with the spiking neural network. Provided the SNN's performance holds up, the goal is to quantify how much energy it actually saves.
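To make the planned energy comparison concrete, here is a minimal sketch of the operation-counting approach commonly used in the SNN literature: an ANN layer costs one multiply-accumulate (MAC) per weight, while a rate-coded SNN layer costs one accumulate (AC) per incoming spike. The per-operation energies (≈4.6 pJ per 32-bit float MAC, ≈0.9 pJ per 32-bit float add, 45 nm process figures) are widely cited estimates, and the layer sizes, timestep count, and spike rate below are hypothetical placeholders:

```python
# Rough energy estimate: ANN (dense MACs) vs. SNN (spike-driven ACs)
E_MAC = 4.6e-12  # J per 32-bit float multiply-accumulate (approximate, 45 nm)
E_AC  = 0.9e-12  # J per 32-bit float add (approximate, 45 nm)

layers = [4, 5, 6]  # inputs -> hidden -> classes (example sizes)
macs = sum(a * b for a, b in zip(layers, layers[1:]))  # 4*5 + 5*6 = 50

T = 25       # SNN simulation timesteps (placeholder)
rate = 0.1   # average spikes per neuron per timestep (placeholder)
synops = macs * rate * T  # spike-driven synaptic operations over the run

e_ann = macs * E_MAC      # one forward pass of the dense network
e_snn = synops * E_AC     # one T-step forward pass of the SNN
print(f"ANN ~ {e_ann:.2e} J, SNN ~ {e_snn:.2e} J, ratio ~ {e_ann / e_snn:.2f}x")
```

With these placeholder numbers the SNN comes out cheaper only while the total spike count stays well below the MAC count, which is exactly the trade-off the comparison is meant to measure.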

So far, the dataset has been built and the task has been implemented with the GA-optimized BP neural network and the support vector machine (SVM). The next steps are RBF, the decision tree, the CNN, and tuning the spiking neural network; I will also show how to quantify a network's energy consumption.

This post covers the dataset and the BP and SVM implementations.

First, the dataset. The columns are Mach number (Ma), speed (m/s), altitude (km), angle of attack (°), and morphing-rate category (the label; 6 classes in total). If you are interested in this research, feel free to get in touch for the dataset — it is still a very preliminary version.
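For readers outside MATLAB, here is a minimal Python sketch of the same random 170/30 train/test split (200 samples total) used in the scripts below. The random values stand in for the real Excel data and are placeholders only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # total number of samples in the dataset

# Columns: Ma, speed (m/s), altitude (km), angle of attack (deg); labels 1..6
features = rng.random((n, 4))
labels = rng.integers(1, 7, size=n)

# Shuffle once, then take the first 170 samples for training, the rest for test
perm = rng.permutation(n)
train_idx, test_idx = perm[:170], perm[170:]

X_train, y_train = features[train_idx], labels[train_idx]
X_test,  y_test  = features[test_idx],  labels[test_idx]
print(X_train.shape, X_test.shape)  # (170, 4) (30, 4)
```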

The BP and SVM implementations follow the videos of the Bilibili uploader Afei_Y. His code is in MATLAB and mainly uses the machine-learning toolboxes. I hadn't used MATLAB in a long time, but for machine learning it really is convenient and straightforward. The source code is attached.
Classification with a GA-optimized BP neural network:

SVM classification:
Source code (you can try it with your own dataset as a comparison baseline for your own research):
GA-optimized BP neural network:
(The genetic algorithm calls routines from a separate toolbox; download the source from https://github.com/Time9Y/Matlab-Machine. The SVM script can be run directly.)

%%  Clear the environment
warning off             % suppress warnings
close all               % close open figure windows
clear                   % clear workspace variables
clc                     % clear the command window

%%  Import data
res = xlsread('a.xls');

%%  Split into training and test sets
temp = randperm(200);               % random permutation (dataset has 200 samples)

P_train = res(temp(1: 170), 1: 4)'; % features: Ma, speed, altitude, angle of attack
T_train = res(temp(1: 170), 5)';    % labels: morphing-rate class (1..6)
M = size(P_train, 2);

P_test = res(temp(171: end), 1: 4)';
T_test = res(temp(171: end), 5)';
N = size(P_test, 2);

%%  Normalize the data
[p_train, ps_input] = mapminmax(P_train, 0, 1);
p_test  = mapminmax('apply', P_test, ps_input);

t_train = ind2vec(T_train);         % one-hot encode the class labels
t_test  = ind2vec(T_test );

%%  Build the model
S1 = 5;                             % number of hidden-layer nodes
net = newff(p_train, t_train, S1);

%%  Training parameters
net.trainParam.epochs = 1000;        % maximum number of iterations
net.trainParam.goal   = 1e-6;        % error goal
net.trainParam.lr     = 0.01;        % learning rate

%%  GA optimization parameters
gen = 50;                       % number of generations
pop_num = 5;                    % population size
S = size(p_train, 1) * S1 + S1 * size(t_train, 1) + S1 + size(t_train, 1);
                                % number of parameters to optimize (weights + biases)
bounds = ones(S, 1) * [-1, 1];  % bounds on each optimization variable

%%  Initialize the population
prec = [1e-6, 1];               % epsilon = 1e-6, real-valued encoding
normGeomSelect = 0.09;          % selection-function parameter
arithXover = 2;                 % crossover-function parameter
nonUnifMutation = [2 gen 3];    % mutation-function parameters

initPop = initializega(pop_num, bounds, 'gabpEval', [], prec);

%%  Run the genetic algorithm
[Bestpop, endPop, bPop, trace] = ga(bounds, 'gabpEval', [], initPop, [prec, 0], 'maxGenTerm', gen,...
                           'normGeomSelect', normGeomSelect, 'arithXover', arithXover, ...
                           'nonUnifMutation', nonUnifMutation);

%%  Decode the best individual
[val, W1, B1, W2, B2] = gadecod(Bestpop);

%%  Assign the optimized weights and biases to the network
net.IW{1, 1} = W1;
net.LW{2, 1} = W2;
net.b{1}     = B1;
net.b{2}     = B2;

%%  Train the network
net.trainParam.showWindow = 1;       % show the training window
net = train(net, p_train, t_train);  % train the model

%%  Simulation / testing
t_sim1 = sim(net, p_train);
t_sim2 = sim(net, p_test );

%%  Decode one-hot outputs back to class labels
T_sim1 = vec2ind(t_sim1);
T_sim2 = vec2ind(t_sim2);

%%  Performance evaluation
error1 = sum((T_sim1 == T_train)) / M * 100;    % training accuracy (%)
error2 = sum((T_sim2 == T_test )) / N * 100;    % test accuracy (%)

%%  Sort by true label (for plotting)
[T_train, index_1] = sort(T_train);
[T_test , index_2] = sort(T_test );

T_sim1 = T_sim1(index_1);
T_sim2 = T_sim2(index_2);

%%  GA fitness curve
figure
plot(trace(:, 1), 1 ./ trace(:, 2), 'LineWidth', 1.5);
xlabel('Generation');
ylabel('Fitness value');
string = {'Fitness curve'};
title(string)
grid on

%%  Plot predictions
figure
plot(1: M, T_train, 'r-*', 1: M, T_sim1, 'b-o', 'LineWidth', 1)
legend('True value', 'Predicted value')
xlabel('Sample')
ylabel('Predicted class')
string = {'Training-set predictions vs. ground truth'; ['Accuracy = ' num2str(error1) '%']};
title(string)
grid

figure
plot(1: N, T_test, 'r-*', 1: N, T_sim2, 'b-o', 'LineWidth', 1)
legend('True value', 'Predicted value')
xlabel('Sample')
ylabel('Predicted class')
string = {'Test-set predictions vs. ground truth'; ['Accuracy = ' num2str(error2) '%']};
title(string)
grid

%%  Confusion matrices
figure
cm = confusionchart(T_train, T_sim1);
cm.Title = 'Confusion Matrix for Train Data';
cm.ColumnSummary = 'column-normalized';
cm.RowSummary = 'row-normalized';
    
figure
cm = confusionchart(T_test, T_sim2);
cm.Title = 'Confusion Matrix for Test Data';
cm.ColumnSummary = 'column-normalized';
cm.RowSummary = 'row-normalized';

SVM:

%%  Clear the environment
warning off             % suppress warnings
close all               % close open figure windows
clear                   % clear workspace variables
clc                     % clear the command window

%%  Import data
res = xlsread('a.xls');

%%  Split into training and test sets
temp = randperm(200);               % random permutation (dataset has 200 samples)

P_train = res(temp(1: 170), 1: 4)'; % features: Ma, speed, altitude, angle of attack
T_train = res(temp(1: 170), 5)';    % labels: morphing-rate class (1..6)
M = size(P_train, 2);

P_test = res(temp(171: end), 1: 4)';
T_test = res(temp(171: end), 5)';
N = size(P_test, 2);

%%  Normalize the data
[p_train, ps_input] = mapminmax(P_train, 0, 1);
p_test = mapminmax('apply', P_test, ps_input);
t_train = T_train;
t_test  = T_test ;

%%  Transpose to the shape the model expects
p_train = p_train'; p_test = p_test';
t_train = t_train'; t_test = t_test';

%%  Build the model
c = 10.0;      % penalty factor
g = 0.01;      % RBF kernel parameter
cmd = ['-t 2 -c ', num2str(c), ' -g ', num2str(g)];   % libsvm options: RBF kernel
model = svmtrain(t_train, p_train, cmd);

%%  Simulation / testing
T_sim1 = svmpredict(t_train, p_train, model);
T_sim2 = svmpredict(t_test , p_test , model);

%%  Performance evaluation
error1 = sum((T_sim1' == T_train)) / M * 100;   % training accuracy (%)
error2 = sum((T_sim2' == T_test )) / N * 100;   % test accuracy (%)

%%  Sort by true label (for plotting)
[T_train, index_1] = sort(T_train);
[T_test , index_2] = sort(T_test );

T_sim1 = T_sim1(index_1);
T_sim2 = T_sim2(index_2);

%%  Plot predictions
figure
plot(1: M, T_train, 'r-*', 1: M, T_sim1, 'b-o', 'LineWidth', 1)
legend('True value', 'Predicted value')
xlabel('Sample')
ylabel('Predicted class')
string = {'Training-set predictions vs. ground truth'; ['Accuracy = ' num2str(error1) '%']};
title(string)
grid

figure
plot(1: N, T_test, 'r-*', 1: N, T_sim2, 'b-o', 'LineWidth', 1)
legend('True value', 'Predicted value')
xlabel('Sample')
ylabel('Predicted class')
string = {'Test-set predictions vs. ground truth'; ['Accuracy = ' num2str(error2) '%']};
title(string)
grid

%%  Confusion matrices
figure
cm = confusionchart(T_train, T_sim1);
cm.Title = 'Confusion Matrix for Train Data';
cm.ColumnSummary = 'column-normalized';
cm.RowSummary = 'row-normalized';
    
figure
cm = confusionchart(T_test, T_sim2);
cm.Title = 'Confusion Matrix for Test Data';
cm.ColumnSummary = 'column-normalized';
cm.RowSummary = 'row-normalized';

Origin blog.csdn.net/cyy0789/article/details/127440891