Nonlinear system modeling with a BP neural network (MATLAB implementation)

The complete code for this post comes from:

MATLAB Neural Network 43 Case Studies - MATLAB & Simulink Books

1 Case background

        In engineering applications, we often encounter complex nonlinear systems whose state equations are difficult to model accurately with analytical methods. In such cases, a BP neural network can be built to represent the nonlinear system. The method treats the unknown system as a black box: it first trains the BP neural network on the system's input and output data so that the network can express the unknown function, and then uses the trained network to predict the system output.
        The nonlinear function fitted in this chapter is

        The graph of this function is shown in Figure 2-1:

Figure 2-1 Function graph

2 Model establishment

        The nonlinear function fitting algorithm process based on BP neural network can be divided into three steps: BP neural network construction, BP neural network training and BP neural network prediction, as shown in Figure 2-2.


        BP neural network construction determines the network structure from the characteristics of the fitted nonlinear function. Since the function has two input parameters and one output parameter, the BP neural network structure is 2-5-1: 2 nodes in the input layer, 5 nodes in the hidden layer, and 1 node in the output layer.
        BP neural network training uses the nonlinear function's input and output data to train the network so that the trained network can predict the function's output. 2000 sets of input and output data were randomly obtained from the nonlinear function; 1900 sets were randomly selected as training data, and the remaining 100 sets were used as test data to evaluate the network's fitting performance.
        BP neural network prediction uses the trained network to predict the function output, and the prediction results are then analyzed.

3 MATLAB implementation

        Based on BP neural network theory, the nonlinear fitting algorithm is implemented in MATLAB.

3.1 BP neural network toolbox function

        MATLAB includes the Neural Network Toolbox. Based on artificial neural network theory, the toolbox implements in MATLAB most of the subroutines the theory involves, such as formula operations, matrix operations, and equation solving, for designing and training neural networks. Users only need to call the relevant subroutines to complete tasks such as network structure design, weight initialization, network training, and result output, avoiding the need to write large, complex programs. The toolbox currently covers perceptrons, linear networks, BP neural networks, radial basis networks, self-organizing networks, and regression networks. BP neural networks mainly use three toolbox functions: newff, train, and sim. Each is explained below.

3.1.1 newff: BP neural network construction function

Function: constructs a BP neural network.

Function form:

net = newff(P,T,S,TF,BTF,BLF,PF,IPF,OPF,DDF)

P: input data matrix.
T: output data matrix.
S: number of hidden layer nodes.
TF: node transfer function, including the hard-limit transfer function hardlim, the symmetric hard-limit transfer function hardlims, the linear transfer function purelin, the tangent sigmoid transfer function tansig, and the log sigmoid transfer function logsig.
BTF: training function, including the gradient descent BP training function traingd, the gradient descent with momentum BP training function traingdm, the adaptive learning rate gradient descent BP training function traingda, the gradient descent with momentum and adaptive learning rate BP training function traingdx, and the Levenberg-Marquardt BP training function trainlm.
BLF: network learning function, including the BP learning rule learngd and the BP learning rule with momentum learngdm.
PF: performance function, including the mean absolute error function mae and the mean squared error function mse.
IPF: input processing function.
OPF: output processing function.
DDF: data division function.
Generally, the first 6 parameters are set by the user, and the last 4 use the system defaults.
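
As an illustrative sketch (not from the original text), a 2-5-1 network for this case could be constructed with the first six parameters set explicitly; P and T below are placeholder data matrices:

```matlab
% Sketch only: construct a 2-5-1 BP network with the first six newff
% parameters set explicitly; the remaining four parameters use defaults.
P = rand(2, 100);   % placeholder input data matrix (2 inputs, 100 samples)
T = rand(1, 100);   % placeholder output data matrix (1 output)
net = newff(P, T, 5, {'tansig','purelin'}, 'trainlm', 'learngdm', 'mse');
```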

3.1.2 train: BP neural network training function

Function: trains a BP neural network with training data.
Function form:

[net,tr] = train(NET,X,T,Pi,Ai)

NET: the network to be trained.
X: input data matrix.
T: output data matrix.
Pi: initial input delay conditions.
Ai: initial layer delay conditions.
net: the trained network.
tr: training record.
Generally, the first three parameters are set by the user, and the last two use the system defaults.

3.1.3 sim: BP neural network prediction function

Function: uses the trained BP neural network to predict the function output.

Function form:

y = sim(net,x)

net: the trained network.
x: input data.
y: network prediction data.

3.1.4 Complete MATLAB code

        First, 2000 sets of input and output data are randomly generated from the nonlinear function equation and stored in the file data.mat, where input holds the function's input data and output holds its output data. 1900 sets are randomly selected as network training data and 100 sets as network test data, and the training data are normalized.

        Second, the training data are used to train the BP neural network so that the network can predict the output of the nonlinear function.

        Finally, the trained BP neural network is used to predict the nonlinear function's output, and the predicted output is compared with the expected output to analyze the fitting ability of the network.
%% Prediction algorithm based on a BP neural network
%% Clear the environment
clc
clear

%% Extract and normalize training and test data
% Load the input and output data
load data input output

% Generate a random ordering of 1 to 2000
k=rand(1,2000);
[m,n]=sort(k);

% Split into training and test data
input_train=input(n(1:1900),:)';
output_train=output(n(1:1900));
input_test=input(n(1901:2000),:)';
output_test=output(n(1901:2000));

% Normalize the selected training input and output data
[inputn,inputps]=mapminmax(input_train);
[outputn,outputps]=mapminmax(output_train);

%% BP network training
% Initialize the network structure
net=newff(inputn,outputn,5);

net.trainParam.epochs=100;
net.trainParam.lr=0.1;
net.trainParam.goal=0.00004;

% Train the network
net=train(net,inputn,outputn);

%% BP network prediction
% Normalize the test data
inputn_test=mapminmax('apply',input_test,inputps);

% Network prediction output
an=sim(net,inputn_test);

% Denormalize the network output
BPoutput=mapminmax('reverse',an,outputps);

%% Result analysis

figure(1)
plot(BPoutput,':og')
hold on
plot(output_test,'-*');
legend('Predicted output','Expected output')
title('BP network prediction output','fontsize',12)
ylabel('Function output','fontsize',12)
xlabel('Sample','fontsize',12)

% Prediction error
error=BPoutput-output_test;

figure(2)
plot(error,'-*')
title('BP network prediction error','fontsize',12)
ylabel('Error','fontsize',12)
xlabel('Sample','fontsize',12)

figure(3)
plot((output_test-BPoutput)./BPoutput,'-*');
title('Percentage error of neural network prediction')

errorsum=sum(abs(error));
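
The data.mat file loaded above is not generated in this listing. As a sketch, it could be created as follows; the function y = x1^2 + x2^2 is assumed here purely for illustration, standing in for the nonlinear function defined in this chapter:

```matlab
% Sketch only: generate data.mat for the script above.
% y = x1^2 + x2^2 is an assumed stand-in for the chapter's function.
input = 4 * rand(2000, 2) - 2;               % 2000 random points in [-2,2]^2
output = (input(:,1).^2 + input(:,2).^2)';   % 1-by-2000 output row vector
save data input output
```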

4 Result analysis

        The trained BP neural network is used to predict the function output. The prediction results and the error between the network's predicted output and the expected output are shown in the figures below.

        As the figures show, although the BP neural network has good fitting ability, the prediction results still contain some error, and the prediction error at some sample points is large. Later cases discuss BP neural network optimization algorithms that yield better prediction results.

5 Case extensions

5.1 Multi-hidden layer BP neural network

        A BP neural network consists of an input layer, hidden layers, and an output layer, and can have either a single hidden layer or multiple hidden layers. Compared with a single hidden layer, multiple hidden layers offer stronger generalization ability and higher prediction accuracy at the cost of longer training time. The number of hidden layers should therefore be chosen by weighing network accuracy against training time: for simpler mapping relationships, a single hidden layer can be chosen to speed up training when the accuracy requirement is met; for complex mapping relationships, multiple hidden layers can be chosen to improve prediction accuracy.
        The newff function in the MATLAB Neural Network Toolbox makes it easy to build a BP neural network with multiple hidden layers. It is called as follows:

net = newff(inputn,outputn,[5,5]);
        The single hidden layer and double hidden layer BP neural networks are compared in terms of running time and prediction accuracy. With the same network structure and 100 training iterations, the averages of 10 prediction runs are compared. The results are listed in Table 2-1.

 

        As Table 2-1 shows, compared with the single hidden layer BP neural network, the double hidden layer network achieves higher prediction accuracy but longer running time.

5.2 Number of hidden layer nodes

        When constructing a BP neural network, attention should be paid to the number of hidden layer nodes. With too few hidden layer nodes, the network cannot establish complex mapping relationships and the prediction error will be large. With too many nodes, learning time increases and "overfitting" may occur, that is, the training samples are predicted accurately while the prediction error on other samples becomes large. The prediction errors of BP neural networks with different numbers of hidden layer nodes are listed in Table 2-2.

        Since the nonlinear function fitted in this case is relatively simple, the prediction error keeps decreasing as the number of nodes increases. For complex problems, however, the prediction error generally first decreases and then increases as the number of nodes grows.
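
The effect of the hidden layer size can be checked with a simple loop. This sketch (an illustration, not the book's code) reuses the variables from the main script above (inputn, outputn, input_test, output_test, inputps, outputps):

```matlab
% Sketch only: compare prediction error across hidden layer sizes.
% Reuses the normalized data and mapminmax settings from the main script.
for hiddennum = [3 5 7 9]
    net = newff(inputn, outputn, hiddennum);
    net.trainParam.epochs = 100;
    net = train(net, inputn, outputn);
    an = sim(net, mapminmax('apply', input_test, inputps));
    BPoutput = mapminmax('reverse', an, outputps);
    fprintf('hidden nodes = %d, sum of abs errors = %.4f\n', ...
            hiddennum, sum(abs(BPoutput - output_test)));
end
```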

5.3 Impact of training data on prediction accuracy

        The prediction accuracy of a neural network is closely related to the amount of training data. Especially for networks with multiple inputs and outputs, a lack of sufficient training data can cause large prediction errors.
        The author once built a 4-input, 5-output BP neural network to predict experimental results. The training data came from real experiments; because the experimental process was complex, only 84 sets of data were obtained. 80 sets were selected as training data and the remaining 4 sets as test data. The prediction results of the trained BP neural network are listed in Table 2-3.
        As Table 2-3 shows, due to the lack of training data the BP neural network was not fully trained, and the error between its predicted values and the expected values is large.
        In a similar prediction problem, the author built a 4-input, 4-output BP neural network to predict system output. The training data came from model simulation results; because the model could be simulated in software, many data sets were available, and 1500 sets were selected to train the network. The resulting network predictions were close to the expected values.

5.4 Node transfer function

        The newff function in the MATLAB Neural Network Toolbox provides several node transfer functions, mainly the following three.

        With the same network structure, weights, and thresholds, the relationship between the BP neural network's prediction error, the mean squared error, and the node transfer functions of the hidden and output layers is listed in Table 2-4.

        As Table 2-4 shows, the choice of hidden layer and output layer transfer functions has a large impact on the prediction accuracy of the BP neural network. Generally, the hidden layer uses the logsig or tansig function, and the output layer uses the tansig or purelin function.
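
The transfer functions can be specified directly in the newff call. This sketch (not from the original text) compares a logsig and a tansig hidden layer, each with a purelin output layer, on the same training data from the main script:

```matlab
% Sketch only: specify hidden and output layer transfer functions.
% inputn/outputn are the normalized training data from the main script.
net_logsig = newff(inputn, outputn, 5, {'logsig','purelin'});
net_tansig = newff(inputn, outputn, 5, {'tansig','purelin'});
net_logsig = train(net_logsig, inputn, outputn);
net_tansig = train(net_tansig, inputn, outputn);
```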

5.5 Limitations of network fitting

        Although the BP neural network has good fitting ability, this ability is not unlimited; for some complex systems, its prediction results have large errors. One such function's graph is shown in Figure 2-6.

        2000 sets of input and output data of this function are randomly selected; 1900 sets are used to train the network and 100 sets to test its fitting ability. A single hidden layer BP neural network with a 2-5-1 structure is used, and after 100 training iterations the network predicts the function output. The prediction results are shown in Figure 2-7. As the figure shows, for complex nonlinear systems the prediction error of the BP neural network is relatively large, which illustrates the limits of its fitting ability.

Origin blog.csdn.net/weixin_44209907/article/details/131787697