Interval prediction | MATLAB implements QRLSTM long short-term memory neural network quantile regression time series interval prediction

List of effects

  • Advanced version (3 result figures, omitted here)

  • Basic version (1 result figure, omitted here)

Basic introduction

QRLSTM is a time series interval prediction model based on the long short-term memory (LSTM) neural network. Instead of producing a single point forecast, it uses quantile regression to predict a range of possible outcomes.
Specifically, QRLSTM uses an LSTM network to learn the long- and short-term dependencies in a time series, and then applies quantile regression to estimate a range of possible outcomes. Quantile regression is a useful technique for predicting the upper and lower bounds of a prediction interval at a given confidence level, which is exactly what interval forecasting requires.
The QRLSTM model has strong predictive power, especially for nonlinear time series, and has been widely applied to stock market forecasting, weather forecasting, traffic prediction, and other fields.

Model description

The mathematical formulation of the QRLSTM model is as follows:
First, we define the hidden state and cell state in the LSTM network:

h_t, c_t = \text{LSTM}(x_t, h_{t-1}, c_{t-1})

  • where x_t is the input at time step t, and h_{t-1} and c_{t-1} are the hidden state and cell state at the previous time step, respectively (a minimal sketch of one LSTM cell update follows).
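For readers who want to see what this recurrence actually computes, the following is a minimal MATLAB sketch of one LSTM cell update, written directly from the standard gate equations. The function name lstm_cell_step and the stacked weight matrices W, R and bias b are hypothetical and are not part of the original post or of the toolbox code shown later.

% Minimal sketch of one LSTM cell update: [h_t, c_t] = LSTM(x_t, h_{t-1}, c_{t-1}).
% W, R, b are hypothetical stacked weights/biases for the four gates
% (input, forget, cell candidate, output), for illustration only.
function [h, c] = lstm_cell_step(x, h_prev, c_prev, W, R, b)
    numHidden = numel(b) / 4;
    z = W * x + R * h_prev + b;                    % pre-activations for all gates
    i = sigmoid(z(1:numHidden));                   % input gate
    f = sigmoid(z(numHidden+1:2*numHidden));       % forget gate
    g = tanh(z(2*numHidden+1:3*numHidden));        % candidate cell state
    o = sigmoid(z(3*numHidden+1:end));             % output gate
    c = f .* c_prev + i .* g;                      % new cell state
    h = o .* tanh(c);                              % new hidden state
end

function y = sigmoid(x)
    y = 1 ./ (1 + exp(-x));
end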

Then, we define the loss function for quantile regression:

\mathcal{L}_{\tau} = \sum_{i=1}^{n} \rho_{\tau}\left(y_i - f_{\theta}(x_i)\right)

  • where \tau is the quantile level, y_i is the true value of the time series at time step i, f_{\theta}(x_i) is the model's prediction at time step i, and \rho_{\tau}(u) is the quantile (pinball) loss function defined below (a short MATLAB sketch follows the definition):

\rho_{\tau}(u) = \begin{cases} \tau u & \text{if } u \geq 0 \\ (\tau - 1)u & \text{if } u < 0 \end{cases}
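As a quick illustration (not taken from the original post), the pinball loss can be written in MATLAB as a one-line anonymous function; the names pinball, u, and tau are hypothetical.

% Pinball (quantile) loss rho_tau(u), with u = y - yhat; illustrative names only.
pinball = @(u, tau) max(tau .* u, (tau - 1) .* u);

% Example: true value 10, prediction 12, quantile level 0.9
u = 10 - 12;               % u = -2 (the model over-predicted)
loss = pinball(u, 0.9);    % (0.9 - 1) * (-2) = 0.2

The max form is equivalent to the piecewise definition above: for u >= 0 the first term dominates, and for u < 0 the second one does.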

Ultimately our goal is to minimize the loss function at all quantile levels:

\mathcal{L} = \sum_{\tau \in \{\tau_1, \tau_2, \dots, \tau_T\}} \mathcal{L}_{\tau}

  • where \{\tau_1, \tau_2, \dots, \tau_T\} is the set of quantile levels.

The QRLSTM model uses stochastic gradient descent or other optimization algorithms to minimize the above loss function to obtain the optimal model parameters.
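For concreteness, here is a minimal sketch (not the author's code) of evaluating this combined loss for a set of quantile levels; y, yhat, and taus are hypothetical variables, with yhat(:, k) holding the predictions for quantile taus(k).

% Combined quantile-regression loss over a set of quantile levels (illustration only).
pinball = @(u, tau) max(tau .* u, (tau - 1) .* u);

taus = [0.05 0.5 0.95];                                           % hypothetical quantile levels
y    = randn(100, 1);                                             % hypothetical true values
yhat = repmat(y, 1, numel(taus)) + 0.1*randn(100, numel(taus));   % hypothetical predictions

L = 0;
for k = 1:numel(taus)
    L = L + sum(pinball(y - yhat(:, k), taus(k)));   % accumulate L_tau over the quantile set
end
disp(L)   % total loss that the optimizer would minimize over the model parameters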

Programming

%--------------------------------------------------------------------------
% LSTM network architecture
layers = [ ...
    sequenceInputLayer(inputSize,'Name','input')                          % input layer
    lstmLayer(numhidden_units1,'OutputMode','sequence','Name','hidden1')  % first LSTM layer, returns the full sequence
    dropoutLayer(0.3,'Name','dropout_1')                                  % dropout for regularization
    lstmLayer(numhidden_units2,'OutputMode','last','Name','hidden2')      % second LSTM layer, returns only the last time step
    dropoutLayer(0.3,'Name','dropout_2')
    fullyConnectedLayer(outputSize,'Name','fullconnect')                  % fully connected layer (sets the output dimension)
    quanRegressionLayer('out',i)];                                        % custom quantile regression output layer for quantile level i

% Training options
opts = trainingOptions('adam', ...
    'MaxEpochs',10, ...
    'GradientThreshold',1,...
    'ExecutionEnvironment','cpu',...
    'InitialLearnRate',0.001, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropPeriod',2, ...      % drop the learning rate every 2 epochs
    'LearnRateDropFactor',0.5, ...    % halve the learning rate at each drop
    'Shuffle','once',...              % shuffle the training data once before training
    'SequenceLength',1,...            % sequence length
    'MiniBatchSize',24,...
    'Verbose',0);

%--------------------------------------------------------------------------
% Network training
tic
net1 = trainNetwork(xnorm,ynorm,layers,opts);   % train on the normalized inputs and targets
toc                                             % report training time
end   % closes the loop over quantile levels i (loop header not shown in this excerpt)
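The custom quanRegressionLayer used in the layers array is not listed in the post. Below is a minimal sketch of what such a layer might look like, assuming it follows MATLAB's custom regression output layer pattern (subclassing nnet.layer.RegressionLayer) and that its second constructor argument is the quantile level; in the post, i may instead be an index into a vector of quantile levels, and depending on the MATLAB release a backwardLoss method may also be required. This is an illustration, not the author's implementation.

% Illustrative sketch of a quantile regression output layer (not the author's code).
classdef quanRegressionLayer < nnet.layer.RegressionLayer
    properties
        Tau   % quantile level handled by this output layer, e.g. 0.9
    end
    methods
        function layer = quanRegressionLayer(name, tau)
            % name: layer name; tau: quantile level in (0, 1)
            layer.Name        = name;
            layer.Tau         = tau;
            layer.Description = sprintf('Pinball loss for quantile %.2f', tau);
        end
        function loss = forwardLoss(layer, Y, T)
            % Mean pinball loss between predictions Y and targets T
            U    = T - Y;
            L    = max(layer.Tau .* U, (layer.Tau - 1) .* U);
            loss = mean(L(:));
        end
    end
end

With a layer of this kind, quanRegressionLayer('out', tau) plugs directly into the layers array above, and the loop closed by the final end statement would rebuild and train one network per quantile level, yielding one predicted curve per quantile and hence a prediction interval.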

