Electric load forecasting based on Elman neural network (with source code)

1. Algorithm principle

    The Elman neural network is a typical dynamic recurrent neural network. It extends the basic BP network structure by adding a context layer that acts as a one-step delay operator on the hidden layer, giving the network memory. This lets the system adapt to time-varying characteristics and improves the network's global stability. It has stronger computing power than a feed-forward neural network and can also be used to solve fast optimization problems. The Elman network consists of four layers of neurons: the input layer, the hidden layer, the context layer, and the output layer. Figure 1 shows the structure of the Elman neural network; a brief description of each layer follows.

Figure 1 Elman neural network structure diagram
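The one-step-delay idea can be sketched in a few lines: the context layer simply stores the previous hidden state and feeds it back as an extra input at the next step. The NumPy snippet below is a minimal illustration of that recurrence, not the toolbox implementation; all weight names and the toy dimensions are made up for the example (9 inputs matching the 3 days × 3 time points used later, 5 hidden units, 3 outputs).

```python
import numpy as np

def elman_step(x, h_prev, W_xh, W_ch, b_h, W_hy, b_y):
    """One Elman step: the context layer feeds the previous hidden
    state h_prev back into the hidden layer (a one-step delay)."""
    h = np.tanh(W_xh @ x + W_ch @ h_prev + b_h)  # hidden layer (tansig)
    y = W_hy @ h + b_y                           # linear output (purelin)
    return y, h

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 9, 5, 3
W_xh = rng.standard_normal((n_hid, n_in)) * 0.1   # input -> hidden
W_ch = rng.standard_normal((n_hid, n_hid)) * 0.1  # context -> hidden
b_h  = np.zeros(n_hid)
W_hy = rng.standard_normal((n_out, n_hid)) * 0.1  # hidden -> output
b_y  = np.zeros(n_out)

h = np.zeros(n_hid)        # context layer starts empty
for t in range(4):         # unroll over a short input sequence
    x = rng.standard_normal(n_in)
    y, h = elman_step(x, h, W_xh, W_ch, b_h, W_hy, b_y)
print(y.shape, h.shape)    # (3,) (5,)
```

Note that the context vector `h` has exactly as many entries as the hidden layer, which is why the two layers always share the same neuron count.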

(1) Input layer and output layer

     The number of neurons in the input layer equals the dimensionality of the input features, and the number of neurons in the output layer equals the dimensionality of the output labels.

(2) Hidden layer

    Whether in BP, Elman, or other neural networks, the number of hidden-layer neurons is not fixed. If too few hidden neurons are chosen, the network underfits or may fail to learn at all; if too many are chosen, training slows down and the expected accuracy becomes hard to reach. Only when the number of hidden neurons is kept within a reasonable range can the network model learn effectively.
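In practice this trade-off is resolved empirically: sweep several candidate sizes and keep the one with the smallest error, exactly what the script in Section 2 does with nn = [5 10 15 20]. Below is a minimal Python sketch of such a sweep. The scoring function is only a stand-in (a random tansig hidden layer with least-squares output weights, on synthetic data), not Elman training; it just shows the structure of the selection loop.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic regression data standing in for the load samples
X = rng.standard_normal((6, 9))   # 6 samples, 9 input features
T = rng.standard_normal((6, 3))   # 6 samples, 3 targets

def train_error(n_hidden):
    """Stand-in scorer: random tansig hidden layer plus least-squares
    output weights. Real code would train the Elman net here instead."""
    W = rng.standard_normal((n_hidden, X.shape[1]))
    H = np.tanh(X @ W.T)                          # hidden activations
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # fit output weights
    return float(np.mean((H @ beta - T) ** 2))    # training MSE

sizes = [5, 10, 15, 20]
errs = {n: train_error(n) for n in sizes}
best = min(errs, key=errs.get)   # hidden size with the lowest error
print(best)
```

Whatever scorer is used, the selection rule is the same argmin over candidate sizes.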

(3) Context layer

    The context layer, also called the state layer, mainly serves to memorize the hidden layer's output from the previous time step. Consequently, the context layer has the same number of neurons as the hidden layer. It is determined as follows: first find the hidden-layer size that minimizes the training error, and the context layer then takes that same number of neurons.

2. Code combat

    We take electric load forecasting as an example. The MATLAB script below builds and compares Elman networks with four different hidden-layer sizes.

% Ten days of data in total. The first 9 days serve as training samples:
% each 3 consecutive days of load form an input vector and the 4th day's load
% is the target vector, giving 6 training samples. Day 10 is the test sample,
% used to check whether the network predicts that day's load accurately.
clear all
clc
% close all
% nntwarn off;
% Load the sample data given in the problem
a=[0.37 0.51 0.71;...
    0.12 0.17 0.88;...
    0.32 0.99 0.69;...
    0.13 0.55 0.63;...
    0.11 0.42 0.84;...
    0.24 0.53 0.71;...
    0.33 0.44 0.9;...
    0.19 0.66 0.44;...
    0.31 0.67 0.49;...
    0.37 0.51 0.71];

%%%%%%%%%% Select training and test data %%%%%%%%%%
for i=1:7
    p(i,:)=[a(i,:),a(i+1,:),a(i+2,:)];
end
% Training inputs
p_train=p(1:6,:);
% Training targets
t_train=a(4:9,:);
% Test input
p_test=p(7,:);
% Test target
t_test=a(10,:);

% Transpose so samples are columns, as the network expects
p_train=p_train';
t_train=t_train';
p_test=p_test';

%%%%%%%%%% Build and train the networks %%%%%%%%%%
% Candidate hidden-layer sizes
nn=[5 10 15 20];
for i=1:4
    threshold=[0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1];
    % Build an Elman network with nn(i) hidden neurons
    net=newelm(threshold,[nn(i),3],{'tansig','purelin'});
    % Set the training parameters
    net.trainparam.epochs=2000;
    net.trainparam.show=200;
    % Initialize the network
    net=init(net);
    % Train the Elman network
    net=train(net,p_train,t_train);
    % Predict on the test input
    y(i,:)=sim(net,p_test);
    % Absolute error (named err to avoid shadowing MATLAB's built-in error)
    err(i,:)=y(i,:)-t_test;
end

% Plot the prediction error for each hidden-layer size
figure(1)
plot(1:3,err(1,:),'-ro','linewidth',2);
hold on;
plot(1:3,err(2,:),'b:x','linewidth',2);
plot(1:3,err(3,:),'k-.s','linewidth',2);
plot(1:3,err(4,:),'c--d','linewidth',2);
title('Elman prediction error')
set(gca,'Xtick',1:3)
legend('5','10','15','20','location','best')
xlabel('Prediction time point')
ylabel('Absolute error for each hidden-layer size')
hold off;

% Plot the predicted load for each hidden-layer size
figure(2)
plot(1:3,y(1,:),'-ro','linewidth',2);
hold on;
plot(1:3,y(2,:),'b:x','linewidth',2);
plot(1:3,y(3,:),'k-.s','linewidth',2);
plot(1:3,y(4,:),'c--d','linewidth',2);
title('Elman prediction results')
set(gca,'Xtick',1:3)
legend('5','10','15','20','location','best')
xlabel('Prediction time point')
ylabel('Predicted load for each hidden-layer size')
hold off;

% Mean absolute error over the 3 time points for each hidden-layer size
for i=1:4
    average(i)=mean(abs(err(i,:)));
end

% Plot the mean prediction error versus the number of hidden neurons
figure(3)
plot(5:5:20,average,'k-.s','linewidth',2)
xlabel('Number of hidden neurons')
ylabel('Mean absolute prediction error')
hold off;
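For readers working outside MATLAB, the sliding-window sample construction at the top of the script translates directly to Python/NumPy. The sketch below reproduces only the data preparation (same 10-day × 3-time-point load matrix), not the network itself.

```python
import numpy as np

# the same 10-day x 3-time-point load matrix as in the MATLAB script
a = np.array([
    [0.37, 0.51, 0.71],
    [0.12, 0.17, 0.88],
    [0.32, 0.99, 0.69],
    [0.13, 0.55, 0.63],
    [0.11, 0.42, 0.84],
    [0.24, 0.53, 0.71],
    [0.33, 0.44, 0.90],
    [0.19, 0.66, 0.44],
    [0.31, 0.67, 0.49],
    [0.37, 0.51, 0.71],
])

# sliding window: days i..i+2 (9 features) predict day i+3 (3 targets)
p = np.stack([a[i:i+3].ravel() for i in range(7)])  # 7 windows of 9 features
p_train, t_train = p[:6], a[3:9]   # 6 training pairs (targets: days 4..9)
p_test,  t_test  = p[6],  a[9]     # day 10 held out for testing
print(p_train.shape, t_train.shape, p_test.shape)   # (6, 9) (6, 3) (9,)
```

This yields the same 6 training pairs and single test sample that the MATLAB loop builds with p(i,:)=[a(i,:),a(i+1,:),a(i+2,:)].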


Origin blog.csdn.net/qq_45013535/article/details/131489697