Classification prediction | MATLAB implements WOA-CNN: whale optimization algorithm tuning a convolutional neural network for data classification prediction


Classification effect

(Figures 1–5: classification result plots, optimization iteration curve, and confusion matrix plots — images not reproduced here.)

Basic description

1. MATLAB implementation of WOA-CNN multi-feature classification prediction with a multi-feature input model; the required environment is MATLAB 2018b or later;
2. The Whale Optimization Algorithm (WOA) is used to tune the convolutional neural network (CNN) classifier; the optimized hyperparameters are the learning rate, the mini-batch size, and the L2 regularization coefficient (a fitness-function sketch follows this list);
3. Binary and multi-class models with multi-feature input and a single output. The program is thoroughly commented and can be reused by simply replacing the data;
the language is MATLAB, and the program produces classification result plots, optimization iteration curves, and confusion matrix plots;
4. data is the data set, with 12 input features divided into four classes; main is the main program, and the remaining files are function files that do not need to be run separately. The data and program can be obtained from the download area.
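
The fitness that WOA minimizes is not shown in the code excerpts below, so the following is a minimal sketch of what such an objective could look like, assuming the candidate vector holds the learning rate, mini-batch size, and L2 coefficient in that order; the function name cnn_fitness, the validation split, and the 'Verbose' setting are illustrative assumptions rather than the blogger's actual code.

function err = cnn_fitness(x, p_train, t_train, p_val, t_val, lgraph)
% Hypothetical WOA objective (sketch): train the CNN with one candidate
% hyperparameter vector and return the validation error rate to minimize.
lr  = x(1);                               % learning rate
mbs = round(x(2));                        % mini-batch size (rounded to an integer)
l2  = x(3);                               % L2 regularization coefficient

opts = trainingOptions('adam', ...
    'MaxEpochs', 10, ...
    'MiniBatchSize', mbs, ...
    'InitialLearnRate', lr, ...
    'L2Regularization', l2, ...
    'Verbose', false);

net  = trainNetwork(p_train, t_train, lgraph, opts);
pred = classify(net, p_val);              % predicted class labels
err  = 1 - mean(pred == t_val);           % misclassification rate on the validation set
end

% WOA calls the objective with a single position vector, so the data would be
% captured in an anonymous function, e.g.:
% fobj = @(x) cnn_fitness(x, p_train, t_train, p_val, t_val, lgraph);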

Programming

  • Complete program and data acquisition method 1: private message the blogger to exchange a program of equal value;

  • Complete program and data download method 2 (download directly from the resources area): MATLAB implements WOA-CNN whale algorithm to optimize convolutional neural network data classification prediction

  • Complete program and data download method 3 (subscribe to the "Smart Learning" column, which also includes 2 of the column's programs; private message me after subscribing to get the data): MATLAB implements WOA-CNN whale algorithm to optimize convolutional neural network data classification prediction

%% Optimization algorithm parameter settings
SearchAgents_no = 3;                  % number of search agents (whales)
Max_iteration = 5;                    % maximum number of iterations
dim = 3;                              % number of hyperparameters to optimize
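
%% Search-space bounds and optimizer call (hedged sketch, not in the original excerpt)
% The bound values below and the ordering of Best_pos are illustrative
% assumptions; only the call signature matches the WOA function shown later.
lb = [1e-4, 16, 1e-5];                % assumed lower bounds: learning rate, batch size, L2
ub = [1e-1, 128, 1e-2];               % assumed upper bounds: learning rate, batch size, L2

% [Best_Cost, Best_pos, curve] = WOA(SearchAgents_no, Max_iteration, lb, ub, dim, fobj);
% best_lr = Best_pos(1);              % optimized initial learning rate
% best_hd = round(Best_pos(2));       % optimized mini-batch size
% best_l2 = Best_pos(3);              % optimized L2 regularization coefficient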

 
%% Build the model
lgraph = [
 % (the input layer is omitted in this excerpt; e.g. imageInputLayer([12, 1, 1]) would match the 12-feature data)
 convolution2dLayer([1, 1], 32)  % convolutional layer producing 32 feature maps
 batchNormalizationLayer         % batch normalization layer
 reluLayer                       % ReLU activation layer

 dropoutLayer(0.2)               % dropout layer (rate 0.2)
 fullyConnectedLayer(num_class, "Name", "fc")                     % fully connected layer
 softmaxLayer("Name", "softmax")                                  % softmax layer
 classificationLayer("Name", "classification")];                  % classification output layer




%% Training parameter settings
options = trainingOptions('adam', ...        % Adam optimizer
    'MaxEpochs', 10, ...                     % maximum number of training epochs
    'MiniBatchSize', best_hd, ...            % mini-batch size found by WOA
    'InitialLearnRate', best_lr, ...         % initial learning rate found by WOA
    'L2Regularization', best_l2, ...         % L2 regularization coefficient found by WOA
    'LearnRateSchedule', 'piecewise', ...    % piecewise learning-rate schedule
    'LearnRateDropFactor', 0.1, ...          % multiply the learning rate by 0.1
    'LearnRateDropPeriod', 400);             % every 400 epochs
%% Training
net = trainNetwork(p_train, t_train, lgraph, options);    % train the WOA-optimized CNN

%% Prediction
t_sim1 = predict(net, p_train);    % class scores on the training set
t_sim2 = predict(net, p_test );    % class scores on the test set
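
%% From scores to labels (hedged sketch, not part of the original excerpt)
% predict() returns class scores; the confusion-matrix plots mentioned in the
% description need hard labels. T_test below is an assumed name for the true
% test labels, which are not shown in this excerpt.
[~, idx1] = max(t_sim1, [], 2);       % predicted class index, training set
[~, idx2] = max(t_sim2, [], 2);       % predicted class index, test set
% figure; confusionchart(T_test, categorical(idx2));   % confusion matrix plot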
%_________________________________________________________________________%
%  Whale Optimization Algorithm (WOA) source codes demo 1.0               
% The Whale Optimization Algorithm
function [Best_Cost,Best_pos,curve]=WOA(pop,Max_iter,lb,ub,dim,fobj)

% initialize position vector and score for the leader
Best_pos=zeros(1,dim);
Best_Cost=inf; %change this to -inf for maximization problems


%Initialize the positions of search agents
Positions=initialization(pop,dim,ub,lb);

curve=zeros(1,Max_iter);

t=0;% Loop counter

% Main loop
while t<Max_iter
    for i=1:size(Positions,1)
        
        % Return back the search agents that go beyond the boundaries of the search space
        Flag4ub=Positions(i,:)>ub;
        Flag4lb=Positions(i,:)<lb;
        Positions(i,:)=(Positions(i,:).*(~(Flag4ub+Flag4lb)))+ub.*Flag4ub+lb.*Flag4lb;
        
        % Calculate objective function for each search agent
        fitness=fobj(Positions(i,:));
        
        % Update the leader
        if fitness<Best_Cost % Change this to > for maximization problem
            Best_Cost=fitness; % Update alpha
            Best_pos=Positions(i,:);
        end
        
    end
    
    a=2-t*((2)/Max_iter); % a decreases linearly from 2 to 0 in Eq. (2.3)
    
    % a2 linearly decreases from -1 to -2 to calculate t in Eq. (3.12)
    a2=-1+t*((-1)/Max_iter);
    
    % Update the Position of search agents 
    for i=1:size(Positions,1)
        r1=rand(); % r1 is a random number in [0,1]
        r2=rand(); % r2 is a random number in [0,1]
        
        A=2*a*r1-a;  % Eq. (2.3) in the paper
        C=2*r2;      % Eq. (2.4) in the paper
        
        
        b=1;               %  parameters in Eq. (2.5)
        l=(a2-1)*rand+1;   %  parameters in Eq. (2.5)
        
        p = rand();        % p in Eq. (2.6)
        
        for j=1:size(Positions,2)
            
            if p<0.5   
                if abs(A)>=1
                    rand_leader_index = floor(pop*rand()+1);
                    X_rand = Positions(rand_leader_index, :);
                    D_X_rand=abs(C*X_rand(j)-Positions(i,j)); % Eq. (2.7)
                    Positions(i,j)=X_rand(j)-A*D_X_rand;      % Eq. (2.8)
                    
                elseif abs(A)<1
                    D_Leader=abs(C*Best_pos(j)-Positions(i,j)); % Eq. (2.1)
                    Positions(i,j)=Best_pos(j)-A*D_Leader;      % Eq. (2.2)
                end
                
            elseif p>=0.5
              
                distance2Leader=abs(Best_pos(j)-Positions(i,j));
                % Eq. (2.5)
                Positions(i,j)=distance2Leader*exp(b.*l).*cos(l.*2*pi)+Best_pos(j);
                
            end
            
        end
    end
    t=t+1;
    curve(t)=Best_Cost;
    [t Best_Cost]   % display the current iteration and the best cost so far
end
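
As a quick sanity check, the WOA routine above can be exercised on a simple analytic objective before plugging in the CNN fitness. The snippet below is an illustrative sketch: it minimizes the sphere function and plots the convergence curve; it assumes the helper initialization(pop, dim, ub, lb), one of the function files shipped with the program, returns a pop-by-dim matrix of positions inside the bounds.

% Minimal usage sketch: minimize the 5-dimensional sphere function with WOA
fobj = @(x) sum(x.^2);                              % simple test objective
[best_cost, best_pos, curve] = WOA(10, 100, -10, 10, 5, fobj);

figure;
plot(curve, 'LineWidth', 1.5);                      % convergence of the best cost
xlabel('Iteration'); ylabel('Best cost');
title('WOA convergence on the sphere function');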

