Classification Prediction | MATLAB Implementation of Data Classification Prediction Based on BP-AdaBoost

Results

(Figures: classification results plots and confusion matrix plots for the training and test sets)

Basic Introduction

1. MATLAB implementation of data classification prediction based on BP-AdaBoost (complete MATLAB program and data included).
2. Multi-feature input model; it can be reused by directly replacing the data.
3. Written in MATLAB; outputs a classification results plot and a confusion matrix plot.
4. Data classification prediction with BP-AdaBoost.
5. Requires MATLAB 2018 or later.

Research Content

BP-AdaBoost combines two machine learning techniques, the BP neural network and AdaBoost, to improve model accuracy and robustness. AdaBoost is an ensemble learning method that combines multiple weak classifiers into a strong classifier; at each boosting round, the training samples are reweighted so that the next weak classifier focuses on the examples the previous ones misclassified. The basic idea of the BP-AdaBoost algorithm is to use a BP network as the base (weak) model and let AdaBoost enhance it: several BP networks are trained on reweighted versions of the training data, and their predictions are combined by a weighted vote to form a more accurate and robust final model.
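The combination scheme described above can be sketched as a generic AdaBoost skeleton. This is an illustrative Python sketch, not the blogger's MATLAB program; `train_weak` is a hypothetical stand-in for training one BP network on weighted samples, and labels are assumed to be in {-1, +1}:

```python
import numpy as np

def adaboost_train(X, y, train_weak, n_rounds=10):
    """Train an AdaBoost ensemble for binary labels y in {-1, +1}.

    train_weak(X, y, w) must return a classifier h with h(X) -> {-1, +1};
    here it stands in for training a BP network on weighted data.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)                    # sample weights, initially uniform
    learners, alphas = [], []
    for _ in range(n_rounds):
        h = train_weak(X, y, w)
        pred = h(X)
        err = np.sum(w * (pred != y))          # weighted training error
        err = np.clip(err, 1e-10, 1 - 1e-10)   # avoid log(0) / division by zero
        alpha = 0.5 * np.log((1 - err) / err)  # vote weight of this learner
        w *= np.exp(-alpha * y * pred)         # boost misclassified samples
        w /= w.sum()                           # renormalize weights
        learners.append(h)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(X, learners, alphas):
    """Weighted-vote prediction of the ensemble."""
    votes = sum(a * h(X) for h, a in zip(learners, alphas))
    return np.sign(votes)
```

The key design point is the weight update `w *= exp(-alpha * y * pred)`: correctly classified samples are down-weighted and misclassified ones up-weighted, so each subsequent weak learner concentrates on the hard cases.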

Programming

  • For the complete program and data, send a private message to the blogger with the reply "Matlab implements classification prediction based on BP-Adaboost data".
%%  Data normalization
[p_train, ps_input] = mapminmax(P_train, 0, 1);
p_test = mapminmax('apply', P_test, ps_input);
t_train = T_train;
t_test  = T_test;

%%  Feature selection
k = 9;        % number of features to keep
[save_index, mic] = mic_select(p_train, t_train, k);

%%  Print the indices of the selected features
disp('After feature selection, the indices of the 9 retained features are:')
disp(save_index')

%%  Feature importance
figure
bar(mic)
xlabel('Input feature index')
ylabel('Maximal information coefficient')

%%  Dataset after feature selection
p_train = p_train(save_index, :);
p_test  = p_test(save_index, :);

%%  One-hot encode the output labels
t_train = ind2vec(t_train);
t_test  = ind2vec(t_test);

%%  Create the network
net = newff(p_train, t_train, 5);   % BP network with 5 hidden-layer neurons

%%  Set training parameters
net.trainParam.epochs = 1000;  % maximum number of iterations
net.trainParam.goal = 1e-6;    % error goal
net.trainParam.lr = 0.01;      % learning rate

%%  Train the network
net = train(net, p_train, t_train);

%%  Simulation test
t_sim1 = sim(net, p_train);
t_sim2 = sim(net, p_test);

%%  Decode one-hot outputs back to class labels
T_sim1 = vec2ind(t_sim1);
T_sim2 = vec2ind(t_sim2);

%%  Performance evaluation
error1 = sum(T_sim1 == T_train) / M * 100;   % training-set accuracy (%); M = number of training samples
error2 = sum(T_sim2 == T_test ) / N * 100;   % test-set accuracy (%);     N = number of test samples

%%  Plot results
figure
plot(1: M, T_train, 'r-*', 1: M, T_sim1, 'b-o', 'LineWidth', 1)
legend('True value', 'Predicted value')
xlabel('Sample index')
ylabel('Predicted class')
title({'Training set prediction comparison'; ['Accuracy=' num2str(error1) '%']})
grid

figure
plot(1: N, T_test, 'r-*', 1: N, T_sim2, 'b-o', 'LineWidth', 1)
legend('True value', 'Predicted value')
xlabel('Sample index')
ylabel('Predicted class')
title({'Test set prediction comparison'; ['Accuracy=' num2str(error2) '%']})
grid
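Note the normalize-then-apply pattern at the top of the script: mapminmax is fitted on the training set only, and the returned settings (ps_input) are reused on the test set, so no test-set statistics leak into training. A minimal Python sketch of the same idea (a hypothetical helper, with features stored as rows to match the MATLAB features-by-samples layout):

```python
import numpy as np

def fit_minmax(X_train, lo=0.0, hi=1.0):
    """Compute per-feature min/max on the training set only
    (analogous to mapminmax returning ps_input)."""
    mn = X_train.min(axis=1, keepdims=True)
    mx = X_train.max(axis=1, keepdims=True)
    rng = np.where(mx > mn, mx - mn, 1.0)   # guard against constant features

    def apply(X):
        # analogous to mapminmax('apply', X, ps_input)
        return lo + (hi - lo) * (X - mn) / rng

    return apply
```

Applying the returned function to new data always uses the training-set minima and maxima, which is exactly what the 'apply' form of mapminmax does.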


Origin: blog.csdn.net/kjm13182345320/article/details/132843415