Time Series Prediction | MATLAB Implementation of TCN-GRU: Temporal Convolutional Network with Gated Recurrent Unit for Time Series Prediction

Prediction results

(Prediction result figures from the original post.)

Basic introduction

1. MATLAB implementation of TCN-GRU (temporal convolutional network with gated recurrent unit) time series prediction;
2. Runs on MATLAB R2021b;
3. Univariate time series prediction;
4. data is the data set (an Excel file containing a univariate time series); MainTCN_BiGRUTS.m is the main program — put all files in one folder and run it;
5. The command window outputs the evaluation metrics R2, MSE, RMSE, MAE, and MAPE.

The TCN model extracts features from past data with one-dimensional causal convolutions, which preserve temporal order; its residual connections speed up convergence, and its dilated convolutions widen the receptive field for temporal feature extraction. The BiGRU model, a variant of the recurrent neural network, has nonlinear fitting capability, extracts data features effectively, and converges faster than LSTM while achieving similar prediction accuracy. This work combines the two to build a TCN-BiGRU model.

Model description

The TCN's dilated causal convolution structure gives it outstanding feature-extraction capability: it fuses the original features into high-dimensional abstract features, strengthening the mining of feature information. The BiGRU network, in turn, has strong sequence-prediction capability. By combining the two, the features extracted by the TCN are fed into the BiGRU network, which improves processing efficiency. This paper therefore builds a TCN-BiGRU prediction model.
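In MATLAB (R2021b, Deep Learning Toolbox), a combination along these lines can be sketched as a plain layer array. The filter counts, kernel sizes, and hidden units below are illustrative assumptions, and a single gruLayer stands in for the bidirectional GRU:

```matlab
% Sketch of a TCN-style front end feeding a GRU (all sizes assumed):
layers = [
    sequenceInputLayer(1)                         % univariate input series
    convolution1dLayer(3, 32, Padding="causal")   % causal 1-D convolution
    reluLayer
    convolution1dLayer(3, 32, Padding="causal", ...
        DilationFactor=2)                         % dilated causal convolution
    reluLayer
    gruLayer(64, OutputMode="last")               % GRU summarizes the sequence
    fullyConnectedLayer(1)                        % one-step-ahead output
    regressionLayer];
```

A full TCN residual block would need a layerGraph with additionLayer skip connections, and MATLAB has no single built-in bidirectional GRU layer in this release, so this sketch keeps only the causal/dilated-convolution idea.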

TCN-GRU is a neural network model that combines a temporal convolutional network (TCN) with a gated recurrent unit (GRU). The TCN is a convolutional network designed for sequence data that can capture long-range dependencies; the GRU is a recurrent network whose gated memory units handle both short- and long-term dependencies in sequence data.
The model can take multiple input sequences, each representing a different feature or variable. For example, to predict a city's average temperature for the next week, past temperature, humidity, air pressure, and similar variables can serve as input sequences. The model outputs a single value: the average temperature at some future time.
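For the univariate case this repository targets, the input/target pairs are typically built by sliding a window over the series. A minimal sketch — the window length k and all variable names here are assumptions, not taken from the source files:

```matlab
k = 12;                          % look-back window length (assumed)
x = rand(1, 200);                % stand-in for the univariate series
n = numel(x) - k;                % number of training samples
P = zeros(k, n);                 % each column holds k past values
T = zeros(1, n);                 % target: the next value in the series
for i = 1:n
    P(:, i) = x(i : i+k-1);
    T(i)    = x(i+k);
end
```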
In TCN-GRU, the temporal convolutional layers capture long-range dependencies in the sequences, while the GRU layers handle both short- and long-term dependencies. The input sequences are merged into one tensor and fed into the network for training; the optimization goal is to minimize the gap between the predicted output and the true value.
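Training follows the standard regression pattern with trainNetwork. The option values below are placeholders, and layers, p_train, and t_train stand for the network definition and the normalized predictors/targets prepared elsewhere in the main program:

```matlab
% Sketch of the training call (all option values are placeholders):
options = trainingOptions("adam", ...
    MaxEpochs=100, ...               % assumed epoch budget
    InitialLearnRate=1e-3, ...       % assumed learning rate
    Shuffle="every-epoch", ...
    Verbose=false);
net = trainNetwork(p_train, t_train, layers, options);
```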
The TCN-GRU model performs well on time series forecasting problems, especially on sequence data with long-range dependencies, and applies to many scenarios such as stock price prediction and traffic flow prediction.

Programming

  • Complete source code and data: send the blogger a private message with the reply "MATLAB realizes TCN-GRU time convolution gated recurrent unit time series prediction";
%%  Prediction
t_sim1 = predict(net, p_train);     % predictions on the training set
t_sim2 = predict(net, p_test );     % predictions on the test set

%%  De-normalize the data
T_sim1 = mapminmax('reverse', t_sim1, ps_output);
T_sim2 = mapminmax('reverse', t_sim2, ps_output);
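ps_output is the settings structure returned when the training targets were first normalized. A sketch of that forward step — the variable names follow the snippet, but the [0, 1] range is an assumption:

```matlab
[t_train, ps_output] = mapminmax(T_train, 0, 1);   % normalize training targets
t_test = mapminmax('apply', T_test, ps_output);    % reuse the same scaling
```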

%%  Root mean square error (RMSE); M and N are the training and test sample counts
error1 = sqrt(sum((T_sim1' - T_train).^2) ./ M);
error2 = sqrt(sum((T_sim2' - T_test ).^2) ./ N);

disp(['RMSE of the training set: ', num2str(error1)])
disp(['RMSE of the test set: ', num2str(error2)])


%%  Other evaluation metrics

%  Mean absolute error (MAE)
mae1 = sum(abs(T_sim1' - T_train)) ./ M ;
mae2 = sum(abs(T_sim2' - T_test )) ./ N ;

disp(['MAE of the training set: ', num2str(mae1)])
disp(['MAE of the test set: ', num2str(mae2)])

%%  Mean absolute percentage error (MAPE)
MAPE1 = mean(abs((T_train - T_sim1')./T_train));
MAPE2 = mean(abs((T_test - T_sim2')./T_test));

disp(['MAPE of the training set: ', num2str(MAPE1)])
disp(['MAPE of the test set: ', num2str(MAPE2)])

%  Mean bias error (MBE) — signed, so no abs(); the original also reused the
%  training-set residuals for the test set by mistake
mbe1 = sum(T_sim1' - T_train) ./ M ;
mbe2 = sum(T_sim2' - T_test ) ./ N ;

disp(['MBE of the training set: ', num2str(mbe1)])
disp(['MBE of the test set: ', num2str(mbe2)])

%  Mean squared error (MSE)
mse1 = sum((T_sim1' - T_train).^2)./M;
mse2 = sum((T_sim2' - T_test).^2)./N;

disp(['MSE of the training set: ', num2str(mse1)])
disp(['MSE of the test set: ', num2str(mse2)])
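The introduction lists R2 among the reported metrics, though the snippet above does not compute it. A computation consistent with the snippet's naming would be:

```matlab
%  R2 (coefficient of determination)
R2_1 = 1 - sum((T_train - T_sim1').^2) / sum((T_train - mean(T_train)).^2);
R2_2 = 1 - sum((T_test  - T_sim2').^2) / sum((T_test  - mean(T_test )).^2);

disp(['R2 of the training set: ', num2str(R2_1)])
disp(['R2 of the test set: ', num2str(R2_2)])
```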



Origin: blog.csdn.net/kjm13182345320/article/details/132649709