[LSSVM prediction] LSSVM prediction with the whale optimization algorithm [Matlab 006]

1. Introduction

1.1 Basic principle of the least squares support vector machine (LSSVM)
The least squares support vector machine is an improvement on the standard support vector machine. It replaces the inequality constraints of the traditional SVM with equality constraints and uses the sum of squared errors as the empirical loss on the training set. As a result, the quadratic programming problem is transformed into the problem of solving a system of linear equations, which improves both the solving speed and the convergence accuracy.
Common kernel functions include the linear kernel, the polynomial kernel, the RBF (Gaussian) kernel, and the MLP (sigmoid) kernel.
1.2 How to use the LSSVM toolbox
1.2.1 Least squares support vector machine MATLAB toolbox download link: https://www.esat.kuleuven.be/sista/lssvmlab/
1.2.2 Add the LS-SVM files to the MATLAB search path; the toolbox can then be used directly.

1.3 Specific steps:
1 Import training data: load reads .mat and ASCII files; xlsread reads .xls files; csvread reads .csv files.
2 Data preprocessing: its purpose is to speed up training. The methods are:
normalization (scale each group of data into the range -1 to +1; functions: premnmx, postmnmx, tramnmx)
standardization (transform each group of data to mean 0 and variance 1; functions: prestd, poststd, trastd)
principal component analysis (an orthogonal transformation that reduces the dimensionality of the input data; functions: prepca, trapca)
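The normalization step above can be sketched in a few lines (Python for illustration; these are hypothetical re-implementations that mirror the behavior of MATLAB's premnmx/tramnmx/postmnmx, mapping each row to [-1, +1], not the toolbox functions themselves):

```python
import numpy as np

def premnmx(x):
    # Map each row (variable) of x to [-1, 1]; assumes no row is constant
    minx = x.min(axis=1, keepdims=True)
    maxx = x.max(axis=1, keepdims=True)
    xn = 2.0 * (x - minx) / (maxx - minx) - 1.0
    return xn, minx, maxx

def tramnmx(x, minx, maxx):
    # Apply a previously computed mapping to new data (e.g. the test set)
    return 2.0 * (x - minx) / (maxx - minx) - 1.0

def postmnmx(xn, minx, maxx):
    # Invert the mapping to recover the original units
    return (xn + 1.0) * (maxx - minx) / 2.0 + minx
```

The key point, visible in the source code further down, is that the test set must be transformed with the training set's minx/maxx (tramnmx), never re-normalized on its own.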
3 Function regression with LS-SVMlab mainly uses three functions: trainlssvm trains and builds the model, simlssvm makes predictions with the model, and plotlssvm is the toolbox's dedicated plotting function.
4 Parameter description:
A = csvread('traindata.csv');
Ptrain0 = A(:, 1:13); Ttrain0 = A(:, 14:16);
[Ptrain, meanptrain, stdptrain] = prestd(Ptrain0');
[Ttrain, meant, stdt] = prestd(Ttrain0');
prestd() is the data standardization function: meanptrain is the mean vector and stdptrain the standard deviation vector of the data before standardization.
gam = 10; sig2 = 0.5; type = 'function estimation';
LS-SVM requires only two parameters to be tuned. gam and sig2 are the parameters of the least squares support vector machine: gam is the regularization parameter, which balances minimization of the fitting error against smoothness of the solution, and sig2 is the bandwidth parameter of the RBF kernel. The toolbox provides a gridsearch function that can look for good parameter values within a given range. type takes one of two values: 'classification' for classification and 'function estimation' for function regression.
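To illustrate the idea behind the parameter search (a hypothetical stand-in in Python, not LS-SVMlab's gridsearch itself): evaluate a cost, such as a cross-validation error, over a log-spaced grid of (gam, sig2) values and keep the best pair.

```python
import numpy as np

def gridsearch(cost, grid_gam, grid_sig2):
    # Exhaustive search, analogous in spirit to LS-SVMlab's gridsearch:
    # evaluate a user-supplied cost on every (gam, sig2) pair and keep the best.
    best = (None, None, np.inf)
    for gam in grid_gam:
        for sig2 in grid_sig2:
            c = cost(gam, sig2)
            if c < best[2]:
                best = (gam, sig2, c)
    return best

# Example with a toy cost whose minimum is known (a stand-in for a
# cross-validation error); the toy cost is minimized at gam=10, sig2=1.
gams = np.logspace(-1, 3, 9)
sig2s = np.logspace(-2, 2, 9)
toy_cost = lambda g, s: (np.log10(g) - 1.0) ** 2 + np.log10(s) ** 2
best_gam, best_sig2, best_c = gridsearch(toy_cost, gams, sig2s)
```

In practice the grid is searched on a log scale, because useful values of gam and sig2 span several orders of magnitude.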
[alpha, b] = trainlssvm({Ptrain', Ttrain', type, gam, sig2, 'RBF_kernel', 'preprocess'});
alpha holds the support values and b is the bias term. 'preprocess' indicates that the toolbox should normalize the data; it can also be 'original', meaning the data is used as-is. The default is 'preprocess'.
plotlssvm({P, T, type, gam, sig2, 'RBF_kernel', 'preprocess'}, {alpha, b}): the plotlssvm function is the LS-SVM toolbox's dedicated plotting function, and its usage is similar to plot.
The simlssvm function is also an important function of the LS-SVM toolbox. The parameters are as shown above, and the principle is similar to the sim function in the neural network toolbox.
Calling the trainlssvm and simlssvm functions shows that the structure of the least squares support vector machine has much in common with that of a neural network.
Comparison with neural networks:
A neural network can fit the training data more closely, but for out-of-sample estimation LS-SVM performs better: it has stronger generalization ability and trains faster than a neural network.

2. Source code

%=====================================================================
% Initialization
clc
close all
clear
format long
tic
%==============================================================
%% Import data
data  = xlsread('数值.xlsx','Sheet1','A2:E41');  % training set
data1 = xlsread('数值.xlsx','Sheet1','G2:J31');  % test set
[row,col]=size(data);
train_x=data(:,1:col-1);
train_y=data(:,col);
test_x=data1;  % test features (the original read them from data by mistake, leaving data1 unused)
% test_y=data(:,col);
 
train_x=train_x';
train_y=train_y';
test_x=test_x';
% test_y=test_y';
 
%% Normalize the data
[train_x,minx,maxx, train_yy,miny,maxy] =premnmx(train_x,train_y);
test_x=tramnmx(test_x,minx,maxx);
train_x=train_x';
train_yy=train_yy';
train_y=train_y';
test_x=test_x';
% test_y=test_y';
%% Parameter initialization
eps = 10^(-6);
%% LSSVM parameters
type='f';                 % 'f' = function estimation
kernel = 'RBF_kernel';
proprecess='preprocess';  % value corrected: trainlssvm expects 'preprocess'
lb=[0.01 0.02];           % lower bounds of parameters c and g
ub=[1000 100];            % upper bounds of parameters c and g
dim=2;                    % dimension: two optimized parameters (c and g)
SearchAgents_no=20;       % number of search agents
Max_iter=50;              % maximum number of iterations
% Initialize the leader's position vector and score
Leader_pos=zeros(1,dim);
Leader_score=inf;         % change this to -inf for maximization problems
% Initialize the positions of the search agents
% Positions=initialization(SearchAgents_no,dim,ub,lb);
Positions(:,1)=ceil(rand(SearchAgents_no,1).*(ub(1)-lb(1))+lb(1));
Positions(:,2)=ceil(rand(SearchAgents_no,1).*(ub(2)-lb(2))+lb(2));
Convergence_curve=zeros(1,Max_iter);
t=0;                      % loop counter
% Main loop
woa1;  % whale optimization main loop (separate script, not listed in this post)
%% Analyze the results
plot( Convergence_curve,'LineWidth',2);
title(['Whale optimization fitness curve (c1=',num2str(Leader_pos(1)),', c2=',num2str(Leader_pos(2)),', final generation=',num2str(Max_iter),')'],'FontSize',13);
xlabel('Generation');ylabel('Error fitness');
 
bestc = Leader_pos(1);
bestg = Leader_pos(2);
 
gam=bestc;
sig2=bestg;
model=initlssvm(train_x,train_yy,type,gam,sig2,kernel,proprecess);
model=trainlssvm(model);
% Predictions on the training and test sets
[train_predict_y,zt,model]=simlssvm(model,train_x);
[test_predict_y,zt,model]=simlssvm(model,test_x);
 
% De-normalize the predictions
train_predict=postmnmx(train_predict_y,miny,maxy);  % predicted output (training set)
test_predict=postmnmx(test_predict_y,miny,maxy);    % predicted output (test set)
figure
plot(train_predict,':og')
hold on
plot(train_y,'-*')
legend('Predicted output','Expected output')
title('WOA-optimized LSSVM prediction output','fontsize',12)
ylabel('Function output','fontsize',12)
xlabel('Sample','fontsize',12)
disp('Predicted output (test set):')
disp(test_predict)  % the original referenced YPred_best, which is never defined
toc                 % elapsed time
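The woa1 script invoked above is not listed in the post. For reference, here is a minimal, self-contained Python sketch of the whale optimization algorithm it would implement (illustrative only; `woa_minimize` is a hypothetical name, and per-dimension random vectors for A, C, and the spiral parameter are a simplification of the usual scalar formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def woa_minimize(f, lb, ub, n_agents=20, max_iter=50):
    # Minimal Whale Optimization Algorithm: encircling, random search,
    # and the spiral (bubble-net) update, with a linearly shrinking a.
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    X = rng.uniform(lb, ub, (n_agents, dim))
    leader, leader_score = X[0].copy(), np.inf
    for t in range(max_iter):
        for i in range(n_agents):
            X[i] = np.clip(X[i], lb, ub)      # keep agents within bounds
            score = f(X[i])
            if score < leader_score:          # track the best agent (leader)
                leader_score, leader = score, X[i].copy()
        a = 2.0 - 2.0 * t / max_iter          # decreases linearly from 2 to 0
        for i in range(n_agents):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1.0):   # exploit: encircle the leader
                    X[i] = leader - A * np.abs(C * leader - X[i])
                else:                         # explore: move relative to a random whale
                    Xr = X[rng.integers(n_agents)]
                    X[i] = Xr - A * np.abs(C * Xr - X[i])
            else:                             # spiral update around the leader
                l = rng.uniform(-1.0, 1.0, dim)
                X[i] = np.abs(leader - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + leader
    return leader, leader_score
```

In the script above, the fitness evaluated at each (c, g) position would be the LSSVM prediction error rather than a closed-form function.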
 

3. Running results

(figures: WOA fitness curve and prediction result plots)


Origin blog.csdn.net/m0_54742769/article/details/112918699