Classification prediction | MATLAB implements WOA-CNN-BiGRU-Attention data classification prediction

Classification effect

(classification effect figures omitted)

Basic description

1. MATLAB implementation of WOA-CNN-BiGRU-Attention multi-feature classification prediction with a multi-feature input model; the required environment is MATLAB 2023 or later.
2. The Whale Optimization Algorithm (WOA) tunes three key parameters: the learning rate, the convolution kernel size, and the number of neurons, taking the highest test-set accuracy as the objective function (a sketch of such an objective function follows the WOA code below).
3. Binary and multi-class classification models with multi-feature input and a single output. The program is written in MATLAB, carries detailed comments, and can be reused by directly replacing the data. It produces classification effect plots, iterative optimization curves, and confusion matrix plots, and reports evaluation metrics such as accuracy, precision, recall, and F1 score.
4. The data classification prediction program is based on the Whale Optimization Algorithm (WOA), a Convolutional Neural Network (CNN), and a Bidirectional Gated Recurrent Unit (BiGRU) network.
5. Applicable fields: suitable for a wide range of data classification scenarios, such as identification, diagnosis, and classification of rolling bearing faults, transformer oil dissolved-gas faults, power system transmission line fault areas, insulators, distribution networks, power quality disturbances, and other fields.
6. Easy to use: data are imported directly from an EXCEL file, with no major changes to the program needed; detailed internal comments make it easy to understand (see the import sketch after this list).
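A minimal sketch of such an EXCEL import, as referenced in point 6 (the file name, column layout, and split ratio are assumptions, not the program's actual code):

% Hypothetical layout: the last column holds the class label, the rest are features
data = readmatrix('data.xlsx');        % import the EXCEL data file
X = data(:, 1:end-1);                  % feature matrix
Y = categorical(data(:, end));         % class labels
% Random 80/20 train/test split
n = size(X, 1);
idx = randperm(n);
nTrain = round(0.8 * n);
XTrain = X(idx(1:nTrain), :);      YTrain = Y(idx(1:nTrain));
XTest  = X(idx(nTrain+1:end), :);  YTest  = Y(idx(nTrain+1:end));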

Model description

CNN is a feed-forward neural network widely used in deep learning. It consists mainly of convolutional layers, pooling layers, and fully connected layers, accepts multi-dimensional feature arrays as input, and exploits local receptive fields and weight sharing. The convolutional layers extract features from the raw data and mine the internal relationships within it, the pooling layers reduce network complexity and the number of training parameters, and the fully connected layers merge the processed features to compute the classification or regression result.
GRU is an improved variant of LSTM that merges the forget gate and input gate into a single update gate and combines the cell state with the hidden state. It effectively alleviates the vanishing-gradient problem of recurrent neural networks and reduces the number of training parameters while maintaining training performance.
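To make the architecture concrete, here is a minimal sketch of one possible CNN-GRU-Attention layer stack in MATLAB's Deep Learning Toolbox. The layer sizes are placeholders, selfAttentionLayer requires R2023a or later, and because the toolbox ships no built-in bidirectional GRU layer, a plain gruLayer stands in for the bidirectional pass here:

numFeatures = 12;   % number of input features (placeholder value)
numClasses  = 4;    % number of classes (placeholder value)
filterSize  = 3;    % convolution kernel size (WOA-tuned in the program)
numHidden   = 32;   % number of GRU neurons (WOA-tuned in the program)

layers = [
    sequenceInputLayer(numFeatures)
    convolution1dLayer(filterSize, 16, 'Padding', 'same')  % feature extraction
    reluLayer
    maxPooling1dLayer(2, 'Padding', 'same')                % reduce complexity
    gruLayer(numHidden, 'OutputMode', 'sequence')          % recurrent layer
    selfAttentionLayer(1, numHidden)                       % attention mechanism
    globalAveragePooling1dLayer                            % sequence -> vector
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];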

Programming

  • Complete program and data: send the blogger a private message with the keyword MATLAB implements WOA-CNN-BiGRU-Attention data classification prediction;
% The Whale Optimization Algorithm
function [Best_Cost,Best_pos,curve]=WOA(pop,Max_iter,lb,ub,dim,fobj)

% initialize position vector and score for the leader
Best_pos=zeros(1,dim);
Best_Cost=inf; %change this to -inf for maximization problems


%Initialize the positions of search agents
Positions=initialization(pop,dim,ub,lb);

curve=zeros(1,Max_iter);

t=0;% Loop counter

% Main loop
while t<Max_iter
    for i=1:size(Positions,1)
        
        % Return back the search agents that go beyond the boundaries of the search space
        Flag4ub=Positions(i,:)>ub;
        Flag4lb=Positions(i,:)<lb;
        Positions(i,:)=(Positions(i,:).*(~(Flag4ub+Flag4lb)))+ub.*Flag4ub+lb.*Flag4lb;
        
        % Calculate objective function for each search agent
        fitness=fobj(Positions(i,:));
        
        % Update the leader
        if fitness<Best_Cost % Change this to > for maximization problem
            Best_Cost=fitness; % Update alpha
            Best_pos=Positions(i,:);
        end
        
    end
    
    a=2-t*((2)/Max_iter); % a decreases linearly from 2 to 0 in Eq. (2.3)
    
    % a2 linearly decreases from -1 to -2 to calculate l in Eq. (3.12)
    a2=-1+t*((-1)/Max_iter);
    
    % Update the Position of search agents 
    for i=1:size(Positions,1)
        r1=rand(); % r1 is a random number in [0,1]
        r2=rand(); % r2 is a random number in [0,1]
        
        A=2*a*r1-a;  % Eq. (2.3) in the paper
        C=2*r2;      % Eq. (2.4) in the paper
        
        
        b=1;               %  parameters in Eq. (2.5)
        l=(a2-1)*rand+1;   %  parameters in Eq. (2.5)
        
        p = rand();        % p in Eq. (2.6)
        
        for j=1:size(Positions,2)
            
            if p<0.5   
                if abs(A)>=1
                    rand_leader_index = floor(pop*rand()+1);
                    X_rand = Positions(rand_leader_index, :);
                    D_X_rand=abs(C*X_rand(j)-Positions(i,j)); % Eq. (2.7)
                    Positions(i,j)=X_rand(j)-A*D_X_rand;      % Eq. (2.8)
                    
                elseif abs(A)<1
                    D_Leader=abs(C*Best_pos(j)-Positions(i,j)); % Eq. (2.1)
                    Positions(i,j)=Best_pos(j)-A*D_Leader;      % Eq. (2.2)
                end
                
            elseif p>=0.5
              
                distance2Leader=abs(Best_pos(j)-Positions(i,j));
                % Eq. (2.5)
                Positions(i,j)=distance2Leader*exp(b.*l).*cos(l.*2*pi)+Best_pos(j);
                
            end
            
        end
    end
    t=t+1;
    curve(t)=Best_Cost;
    fprintf('Iteration %d: best cost = %g\n', t, Best_Cost);
end
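The driver above calls an initialization helper that is not listed; a standard version, consistent with the reference WOA implementation and typically kept in its own initialization.m file, is:

% Initialize the positions of search agents uniformly within [lb, ub]
function Positions=initialization(pop,dim,ub,lb)

Boundary_no=size(ub,2); % number of boundary sets

% Single shared bound for all variables
if Boundary_no==1
    Positions=rand(pop,dim).*(ub-lb)+lb;
end

% Per-variable bounds: scale each column separately
if Boundary_no>1
    Positions=zeros(pop,dim);
    for i=1:dim
        Positions(:,i)=rand(pop,1).*(ub(i)-lb(i))+lb(i);
    end
end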
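Point 2 of the basic description referred to the objective function passed in as fobj; the sketch below shows the general shape such a function might take. Everything here is illustrative: the decoding of x into the three hyperparameters and the trainAndEvaluate helper are assumptions, not the program's actual code.

% Hypothetical objective: x(1)=learning rate, x(2)=kernel size, x(3)=neurons
function err=fobj(x)
    lr         = x(1);
    kernelSize = round(x(2));
    numNeurons = round(x(3));
    % trainAndEvaluate is a hypothetical helper that builds and trains the
    % CNN-BiGRU-Attention network with these hyperparameters and returns
    % the test-set accuracy in [0, 1]
    acc = trainAndEvaluate(lr, kernelSize, numNeurons);
    err = 1 - acc;   % WOA minimizes, so use (1 - test accuracy)

% Illustrative call: 20 whales, 50 iterations, 3 decision variables
% (the bounds are examples only):
% [Best_Cost,Best_pos,curve]=WOA(20,50,[1e-4 2 10],[1e-2 6 100],3,@fobj);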

