Interval prediction | MATLAB implementation of multivariate time series interval prediction based on QRCNN-LSTM-Multihead-Attention (quantile regression convolutional LSTM with multi-head attention)


Results overview

(Result figures 1–7 omitted.)
Basic introduction

1. MATLAB implementation of QRCNN-LSTM-Multihead-Attention: a convolutional neural network combined with a long short-term memory network and multi-head attention for multivariate time series interval prediction.
2. Multiple output figures; point prediction reports several metrics (MAE, MAPE, RMSE, MSE, R2), and interval prediction reports coverage metrics (prediction interval coverage probability PICP and prediction interval normalized average width PINAW); multiple-input single-output; the package produces a point prediction plot, prediction interval plots at different confidence levels, an error analysis plot, and a kernel density estimation probability density plot (a short sketch of the point metrics follows this list).
3. The included data is a power data set; several associated variables are used to predict the last column (power). The same setup also applies to load forecasting and wind speed forecasting. MainQRCNN_LSTM_MATTNTS is the main program; the remaining files are function files and do not need to be run.
4. High-quality, clearly commented code, including a data preprocessing step (rows with missing NaN values are deleted) and kernel density estimation.
5. The program runs on MATLAB 2021 and later.
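As a rough illustration, and not the original package code (the variable names T_true and T_pred are assumptions), the point-prediction metrics listed in item 2 can be computed in MATLAB along these lines:

% Hypothetical sketch of the point-prediction metrics; T_true and T_pred are
% assumed column vectors of observed and predicted values.
mae_v  = mean(abs(T_pred - T_true));                               % MAE
mape_v = mean(abs((T_pred - T_true) ./ T_true)) * 100;             % MAPE in percent
mse_v  = mean((T_pred - T_true).^2);                               % MSE
rmse_v = sqrt(mse_v);                                              % RMSE
r2_v   = 1 - sum((T_true - T_pred).^2) / sum((T_true - mean(T_true)).^2);  % R2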

Model description

Multi-Head Attention Convolutional LSTM (MHAC-LSTM) is a deep learning model for multivariate time series forecasting. It combines a convolutional neural network (CNN) with a long short-term memory (LSTM) network and uses a multi-head attention mechanism to strengthen the model's representational capacity.
Each input time series variable first passes through a convolutional layer for feature extraction, and the convolutional output is fed into an LSTM layer for temporal modeling. A multi-head attention mechanism is then applied to the LSTM output to capture the relationships and relative importance among the different variables, which improves predictive performance. Overall, it is a powerful deep learning model for multivariate time series forecasting, and for interval forecasting in particular: by combining convolution, LSTM, and attention it captures both the spatial and temporal structure of the input data and accounts for the relationships among variables when predicting, thereby improving prediction accuracy.
Multi-head attention is a mechanism for increasing the expressive power of neural networks and is commonly used in sequence modeling tasks such as machine translation, language generation, and speech recognition.
In the traditional attention mechanism, the model computes the correlation between each position of the input sequence and the target position to obtain the weight with which each input position influences the target. Multi-head attention instead combines several independent attention mechanisms to capture different correlation representations.
Specifically, multi-head attention splits the input into multiple heads, assigns each head its own set of parameters, and applies a separate attention mechanism within each head. The model can thus learn several correlation representations at the same time, which improves its ability to represent the input.
When computing multi-head attention, the model first maps the input into different subspaces through several independent linear transformations, then performs the attention computation within each head on the mapped representations. Finally, the per-head attention results are combined through another linear transformation and passed through an activation function to produce the output.
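As an illustration of this computation, here is a minimal numeric sketch of scaled dot-product multi-head attention in base MATLAB; all names (X, Wq, Wk, Wv, Wo) and dimensions are assumptions for the example, not the package's implementation:

% Minimal sketch of scaled dot-product multi-head attention.
T = 10; dmodel = 8; h = 2; dk = dmodel / h;    % time steps, model width, number of heads
X  = randn(T, dmodel);                         % input sequence, one row per time step
Wq = randn(dmodel, dmodel); Wk = randn(dmodel, dmodel);
Wv = randn(dmodel, dmodel); Wo = randn(dmodel, dmodel);
Q = X * Wq;  K = X * Wk;  V = X * Wv;          % independent linear projections
heads = cell(1, h);
for k = 1:h
    idx = (k-1)*dk + (1:dk);                   % columns belonging to head k
    S = Q(:, idx) * K(:, idx).' / sqrt(dk);    % scaled dot-product scores
    S = S - max(S, [], 2);                     % stabilise before exponentiating
    A = exp(S) ./ sum(exp(S), 2);              % row-wise softmax attention weights
    heads{k} = A * V(:, idx);                  % weighted sum of the value vectors
end
Y = [heads{:}] * Wo;                           % concatenate heads and project the output

Each head attends to a different learned projection of the same sequence, and the final linear map merges the per-head results, as described above.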
The advantage of multi-head attention is that it captures several correlation representations at once, improving the model's expressiveness and generalization ability. In addition, stacking several such layers yields multi-layer multi-head attention (Multi-Layer Multi-Head Attention), which increases the expressive power further.
Multi-head attention has been widely used in natural language processing, image processing, and speech processing, and has become an important modeling tool in deep learning.

Programming

  • Complete program and data acquisition: send a private message to the blogger.
ntrain = round(nwhole*num_size);
	ntest  = nwhole - ntrain;
	% Prepare the training inputs and outputs
	input_train  = input(:, temp(1:ntrain));
	output_train = output(:, temp(1:ntrain));
	% Prepare the test data
	input_test  = input(:,  temp(ntrain+1:ntrain+ntest));
	output_test = output(:, temp(ntrain+1:ntrain+ntest));
	%% Data normalization
	method = @mapminmax;
	[inputn_train, inputps]   = method(input_train);
	inputn_test  = method('apply', input_test, inputps);
	[outputn_train, outputps] = method(output_train);
	outputn_test = method('apply', output_test, outputps);
	% Build a cell array / vector whose length equals the training set size
	XrTrain = cell(size(inputn_train,2),1);
	YrTrain = zeros(size(outputn_train,2),1);
	for i = 1:size(inputn_train,2)
		XrTrain{i,1} = inputn_train(:,i);
		YrTrain(i,1) = outputn_train(:,i);
	end
	% Build a cell array / vector whose length equals the test set size
	XrTest = cell(size(inputn_test,2),1);
	YrTest = zeros(size(outputn_test,2),1);
	for i = 1:size(input_test,2)
		XrTest{i,1} = inputn_test(:,i);
		YrTest(i,1) = outputn_test(:,i);
	end

	%% Build the hybrid CNN-LSTM network architecture
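	%  The layer stack below is only an illustrative sketch of one possible
	%  CNN-LSTM-attention architecture with assumed hyperparameters, not the
	%  original program (numFeatures and numHiddenUnits are assumed variables;
	%  selfAttentionLayer requires a recent Deep Learning Toolbox release).
	quantiles = [0.025 0.25 0.5 0.75 0.975];                  % assumed quantile levels for QR
	layers = [
		sequenceInputLayer(numFeatures)                       % one channel per input variable
		convolution1dLayer(3, 16, "Padding", "same")          % convolutional feature extraction
		reluLayer
		lstmLayer(numHiddenUnits, "OutputMode", "sequence")   % temporal modelling
		selfAttentionLayer(4, 32)                             % multi-head attention (4 heads)
		globalAveragePooling1dLayer                           % collapse the time dimension
		fullyConnectedLayer(numel(quantiles))                 % one output per predicted quantile
		regressionLayer];                                     % replaced by a pinball (quantile) loss for QR training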
%% Prediction interval coverage probability (PICP)
RangeForm = [T_sim(:, 1), T_sim(:, end)];    % lower / upper bound = lowest / highest quantile prediction
Num = 0;

for i = 1 : length(T_train)
    Num = Num + (T_train(i) >= RangeForm(i, 1) && T_train(i) <= RangeForm(i, 2));
end

picp = Num / length(T_train);                % fraction of targets that fall inside the interval
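%% Prediction interval normalized average width (PINAW)
%  Illustrative sketch only, not a file from the original package: the average
%  interval width is normalized by the range of the observed targets.
pinaw = mean(RangeForm(:, 2) - RangeForm(:, 1)) / (max(T_train) - min(T_train));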


    S = cumtrapz(X,Y);             % cumulative integral of the estimated density (approximate CDF)
    Index = find(abs(m-S)<=1e-2);  % grid points whose CDF value is within 1e-2 of the target probability m
    Q = X(max(Index));             % take the largest such point as the m-quantile
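For context, here is a minimal sketch of how a kernel density estimate could supply the grid X, density Y, and target probability m used above (the error vector name and the 0.95 level are assumptions; ksdensity is from the Statistics and Machine Learning Toolbox):

m = 0.95;                                 % target cumulative probability
[Y, X] = ksdensity(errors);               % kernel density estimate of the errors on a grid X
S = cumtrapz(X, Y);                       % integrate the density to get an approximate CDF
Q = X(find(S >= m, 1, 'first'));          % invert the CDF at m, as the lookup above does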

