Regression prediction | MATLAB implementation of TCN-LSTM, a temporal convolutional network combined with a long short-term memory network, for multi-input, single-output regression prediction


Prediction results

(Prediction-result plots 1–6 omitted.)
Basic introduction

1. MATLAB implements TCN-LSTM, a temporal convolutional network combined with a long short-term memory network, for multivariate regression prediction;
2. The operating environment is MATLAB 2021b;
3. The model takes multiple input features and outputs a single variable (multi-input, single-output regression);
4. data is the dataset, stored in an Excel file: the first 7 columns are the inputs and the last column is the output. Run MainTCN_LSTMNN to train and evaluate the model.

Model description

Because TCN uses a dilated causal convolution structure, it has strong feature-extraction capability: it fuses the original features into high-dimensional abstract features and thereby mines more of the information they carry. The LSTM network, in turn, has strong time-series prediction ability. Combining the two, with the features extracted by the TCN fed into the LSTM, lets the memory cells of the LSTM work more efficiently and helps the prediction model learn the complex interactions within the time series more effectively. This article therefore builds a TCN-LSTM prediction model.
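As a rough sketch of such an architecture in MATLAB, one dilated causal convolution block feeding an LSTM could be defined as below. The filter count, hidden size, and dilation factors are illustrative assumptions, not the exact settings of the program described in this article.

```matlab
% Sketch of a TCN-LSTM layer stack (illustrative hyperparameters).
% Requires Deep Learning Toolbox, R2021b or later.
numFeatures = 7;        % 7 input columns, as in the dataset description
numFilters  = 32;       % assumed filter count
numHidden   = 64;       % assumed LSTM hidden units

layers = [
    sequenceInputLayer(numFeatures)
    % Dilated causal convolutions: each layer doubles the dilation
    % factor, enlarging the receptive field along the time axis.
    convolution1dLayer(3, numFilters, 'DilationFactor', 1, 'Padding', 'causal')
    reluLayer
    convolution1dLayer(3, numFilters, 'DilationFactor', 2, 'Padding', 'causal')
    reluLayer
    % The LSTM consumes the TCN features; 'last' emits one vector per sequence.
    lstmLayer(numHidden, 'OutputMode', 'last')
    fullyConnectedLayer(1)   % single regression target
    regressionLayer];
```

A real TCN would normally also add residual connections between convolution blocks; they are omitted here to keep the sketch short.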


TCN-LSTM is a neural network model that combines a temporal convolutional network (TCN) with a long short-term memory network (LSTM). TCN is a convolutional network for sequence data that can capture long-term dependencies in a sequence; LSTM is a recurrent network whose memory cells can model both short-term and long-term dependencies.
The input to a TCN-LSTM model can be several sequences, each representing a different feature or variable. For example, to predict a city's average temperature for the next week, past temperature, humidity, air pressure, and similar variables can serve as the input sequences. The model's output is a single value, such as the average temperature at a future time point.
In TCN-LSTM, the temporal convolution layers capture the long-term dependencies in the sequence, while the LSTM layer models both the short-term and long-term dependencies. The input sequences are merged into one tensor and fed into the network for training; the optimization objective is to minimize the gap between the predicted output and the true value.
TCN-LSTM models perform well on time-series forecasting and regression problems, especially on data with long-term dependencies, and can be applied in many scenarios, such as stock price prediction and traffic flow prediction.
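The data preparation implied above (merging the feature columns and normalizing them before training) can be sketched as follows. The variable names mirror those in the prediction code later in this article, but the file name and the 80/20 train/test split are assumptions.

```matlab
% Sketch: read the Excel dataset and normalize with mapminmax.
% Assumes the first 7 columns are features and the last column is the target.
res = xlsread('data.xlsx');          % file name is an assumption
M = round(0.8 * size(res, 1));       % assumed 80/20 train/test split
N = size(res, 1) - M;

P_train = res(1:M, 1:7)';      T_train = res(1:M, 8)';
P_test  = res(M+1:end, 1:7)';  T_test  = res(M+1:end, 8)';

% Scale features and targets to [0, 1]; keep ps_output so predictions
% can later be mapped back to the original units with 'reverse'.
[p_train, ps_input]  = mapminmax(P_train, 0, 1);
p_test               = mapminmax('apply', P_test, ps_input);
[t_train, ps_output] = mapminmax(T_train, 0, 1);
t_test               = mapminmax('apply', T_test, ps_output);
```

Note that the test set is scaled with the settings fitted on the training set (`'apply'`), so no information leaks from the test data into the normalization.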

Programming

  • Complete source code and data, method 1: private-message the blogger with the reply "MATLAB implements TCN-LSTM time convolution long short-term memory neural network multi-input single-output regression prediction";
  • Complete program and data, method 2: subscribe to the "Combined Optimization" column (which also grants any 8 programs included in that column), then private-message me after subscribing: MATLAB implements TCN-LSTM time convolution long short-term memory neural network multi-input single-output regression prediction
%% Prediction
t_sim1 = predict(net, p_train); 
t_sim2 = predict(net, p_test ); 

%%  Reverse normalization
T_sim1 = mapminmax('reverse', t_sim1, ps_output);
T_sim2 = mapminmax('reverse', t_sim2, ps_output);

%%  Root mean square error (RMSE)
error1 = sqrt(sum((T_sim1' - T_train).^2) ./ M);
error2 = sqrt(sum((T_sim2' - T_test ).^2) ./ N);


%%  Related metrics

%  MAE
mae1 = sum(abs(T_sim1' - T_train)) ./ M ;
mae2 = sum(abs(T_sim2' - T_test )) ./ N ;

disp(['Training set MAE: ', num2str(mae1)])
disp(['Test set MAE: ', num2str(mae2)])

%% Mean absolute percentage error (MAPE)
MAPE1 = mean(abs((T_train - T_sim1')./T_train));
MAPE2 = mean(abs((T_test - T_sim2')./T_test));

disp(['Training set MAPE: ', num2str(MAPE1)])
disp(['Test set MAPE: ', num2str(MAPE2)])

%  MBE (mean bias error: a signed measure, so no absolute value, unlike MAE)
mbe1 = sum(T_sim1' - T_train) ./ M ;
mbe2 = sum(T_sim2' - T_test ) ./ N ;

disp(['Training set MBE: ', num2str(mbe1)])
disp(['Test set MBE: ', num2str(mbe2)])

% Mean square error (MSE)
mse1 = sum((T_sim1' - T_train).^2)./M;
mse2 = sum((T_sim2' - T_test).^2)./N;

disp(['Training set MSE: ', num2str(mse1)])
disp(['Test set MSE: ', num2str(mse2)])
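If a goodness-of-fit measure is also wanted alongside the error metrics above, the coefficient of determination R² can be computed from the same variables. This is an added sketch, not part of the original program:

```matlab
%% Coefficient of determination R^2 (added sketch; reuses the variables above)
R2_1 = 1 - sum((T_sim1' - T_train).^2) / sum((T_train - mean(T_train)).^2);
R2_2 = 1 - sum((T_sim2' - T_test ).^2) / sum((T_test  - mean(T_test )).^2);

disp(['Training set R^2: ', num2str(R2_1)])
disp(['Test set R^2: ', num2str(R2_2)])
```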

Summary



Origin blog.csdn.net/kjm13182345320/article/details/131821766