Multivariate Input Time Series Forecasting with Deep Hybrid Kernel Extreme Learning Machine

 0. Foreword

        The deep hybrid kernel extreme learning machine (DHKELM) time series prediction method works in two stages: first, a multi-layer ELM autoencoder (ELM-AE) is used to extract abstract features; then the extracted abstract features are used to train a hybrid kernel extreme learning machine (HKELM) that produces the prediction. In other words, the deep hybrid kernel extreme learning machine is a multi-layer extreme learning machine (ML-ELM) stacked with an HKELM.

1. Introduction to Theory

1.1 ELM-AE

 Both ELM-AE and ELM have a three-layer network structure, but ELM-AE is an unsupervised learning algorithm: its target output is its own input.

The output weight calculation formula of ELM-AE is as follows:
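In the usual ML-ELM notation (the symbols below are assumptions, not taken from the post itself): with hidden-layer output matrix $H$, input data $X$, and regularization coefficient $C$, the output weights are

$$\beta = \left(\frac{I}{C} + H^{\mathrm T}H\right)^{-1} H^{\mathrm T} X,$$

where the reconstruction target of ELM-AE is its own input $X$.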

1.2 Multilayer extreme learning machine ML-ELM

        ML-ELM is trained layer by layer with ELM-AE. When ML-ELM is trained in this way, the relationship between the output of the i-th hidden layer and the output of the (i-1)-th hidden layer can be expressed by the following formula:
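In the same notation as above (again an assumption about symbols, since they are not defined in the post): if $\beta_i$ denotes the output weights of the ELM-AE trained on the (i-1)-th layer's output and $g(\cdot)$ the activation function, then

$$H_i = g\!\left(H_{i-1}\,\beta_i^{\mathrm T}\right),$$

with $H_0 = X$, the input data.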

1.3 Hybrid Kernel Extreme Learning Machine (HKELM)

        The HKELM regression prediction implementation is described in an earlier reference post.
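As a brief reminder of the idea (the standard HKELM formulation, not the exact content of that post): the hybrid kernel is a weighted combination of two kernels, and the output weights follow the usual kernel ELM solution,

$$K_{\mathrm{hyb}}(x,z) = w\,K_1(x,z) + (1-w)\,K_2(x,z), \qquad 0 \le w \le 1,$$

$$\beta = \left(\frac{I}{C} + \Omega\right)^{-1} T, \qquad \Omega_{ij} = K_{\mathrm{hyb}}(x_i, x_j),$$

where $T$ is the vector of training targets and $C$ the regularization coefficient; a new sample $x$ is predicted as $f(x) = \left[K_{\mathrm{hyb}}(x,x_1),\dots,K_{\mathrm{hyb}}(x,x_N)\right]\beta$.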

1.4 Deep Hybrid Kernel Extreme Learning Machine 

        The deep hybrid kernel extreme learning machine first uses multi-layer ELM-AE (ML-ELM) to extract features from the input data layer by layer, obtaining more effective, more abstract features; on top of these abstract features, kernel function evaluations replace the inner-product operations in high-dimensional space, so that the features are implicitly mapped to an even higher-dimensional space for prediction. This helps to further improve the prediction accuracy and generalization performance of the model. The structure of the deep hybrid kernel extreme learning machine is as follows:

        

2. Deep Hybrid Kernel Extreme Learning Machine for Multivariate Input Time Series Forecasting

       Multivariate input time series forecasting means that there are multiple input variables, and the values of the input variables at several previous time steps are used to predict the value of the output variable at the next time step. If the output variable is power, this is power forecasting; if it is a load value, it is load forecasting. In short, after substituting your own data, you can realize time series forecasting. The rolling multivariate sequences are constructed as follows:

step=5;%all variables from the previous "step" time steps predict the output at the next time step; build the rolling multivariate sequences
for i=1:size(data,1)-step
    input(i,:,:)=data(i:i+step-1,:);   %step x nVars window of all variables
    output(i,:)=data(i+step,end);      %next-step value of the target (last column)
end
input=input(:,:);   %flatten each step x nVars window into one row per sample

        First, build the multi-layer ELM-AE network for deep feature extraction. In this post the number of hidden layers is set to 2, with 100 and 50 nodes respectively.

%ELM-AE parameters
h=[100 ,50];  %number of nodes in each hidden layer
TF='sig';     %ELM-AE activation function
lambda1=inf;  %L2 regularization coefficient of the ELM-AE (inf means no regularization)
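
        A minimal sketch of how layer-by-layer ELM-AE training with these parameters could look (illustrative only and not the post's own code; X is assumed to be the normalized training input with one sample per row):

Hk = X;                                        %features fed to the current layer
beta_ae = cell(1, numel(h));                   %output weights of each ELM-AE
for k = 1:numel(h)
    nIn = size(Hk, 2);
    W = rand(nIn, h(k))*2 - 1;                 %random input weights in [-1, 1]
    b = rand(1, h(k))*2 - 1;                   %random hidden biases
    Hhid = 1 ./ (1 + exp(-(Hk*W + b)));        %'sig' activation of the ELM-AE hidden layer
    if isinf(lambda1)                          %lambda1 = inf: no L2 regularization
        beta_ae{k} = pinv(Hhid) * Hk;          %output weights that reconstruct the input
    else
        beta_ae{k} = (Hhid'*Hhid + eye(h(k))/lambda1) \ (Hhid'*Hk);
    end
    Hk = 1 ./ (1 + exp(-(Hk * beta_ae{k}')));  %ML-ELM forward step: H_k = g(H_{k-1} * beta_k')
end
%Hk now holds the deep features that are fed to the top-level HKELM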

        Then set up the top-level HKELM network: select the two kernel functions and set their kernel parameters and the weight ratio between them, as shown in the code below.

%kernel function types: 1.RBF_kernel 2.lin_kernel 3.poly_kernel 4.wav_kernel
kernel1='RBF_kernel';
kernel2='poly_kernel';

% kernel parameter settings for the first kernel (see kernel_matrix for details)
if strcmp(kernel1,'RBF_kernel')
    ker1_para=1;        %the RBF kernel has one parameter
elseif strcmp(kernel1,'lin_kernel')
    ker1_para=[];       %the linear kernel has no parameters
elseif strcmp(kernel1,'poly_kernel')
    ker1_para=[1,1];    %the polynomial kernel has two parameters
elseif strcmp(kernel1,'wav_kernel')
    ker1_para=[1,1,1];  %the wavelet kernel has three parameters
end
% kernel parameter settings for the second kernel (see kernel_matrix for details)
if strcmp(kernel2,'RBF_kernel')
    ker2_para=1;        %the RBF kernel has one parameter
elseif strcmp(kernel2,'lin_kernel')
    ker2_para=[];       %the linear kernel has no parameters
elseif strcmp(kernel2,'poly_kernel')
    ker2_para=[1,1];    %the polynomial kernel has two parameters
elseif strcmp(kernel2,'wav_kernel')
    ker2_para=[1,1,1];  %the wavelet kernel has three parameters
end
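
        A minimal sketch of how these settings might then be used, assuming a kernel_matrix(X, kernel_type, kernel_pars, Xt) helper in the LS-SVMlab style that the comments above refer to; the mixing weight w, the regularization coefficient C, and the names Ptrain/Ttrain/Ptest are assumptions, not the post's exact code:

w = 0.5;    %weight ratio of kernel1 in the hybrid kernel (assumed value)
C = 100;    %regularization coefficient (assumed value)

%hybrid kernel matrix on the deep features of the training set
Omega = w*kernel_matrix(Ptrain, kernel1, ker1_para) + ...
        (1-w)*kernel_matrix(Ptrain, kernel2, ker2_para);

%kernel ELM output weights: beta = (I/C + Omega)^(-1) * Ttrain
beta = (eye(size(Omega,1))/C + Omega) \ Ttrain;

%cross kernel between training and test features, then prediction
Omega_t = w*kernel_matrix(Ptrain, kernel1, ker1_para, Ptest) + ...
          (1-w)*kernel_matrix(Ptrain, kernel2, ker2_para, Ptest);
Tpred = Omega_t' * beta;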

Prediction effect:

3. Slime mold optimized deep hybrid kernel extreme learning machine for multivariate input time series prediction

        Since the final prediction accuracy of the deep hybrid kernel extreme learning machine is affected by the kernel parameters, the slime mould optimization algorithm is used to optimize them, with the prediction accuracy as the fitness function.
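        A hedged sketch of what such a fitness function could look like (the helper dhkelm_train_predict and the parameter encoding are hypothetical, not the post's code); the optimizer minimizes the validation error:

function fit = dhkelm_fitness(x, Ptrain, Ttrain, Pvalid, Tvalid)
%x encodes the candidate parameters, e.g. x(1): regularization C,
%x(2): RBF kernel parameter, x(3:4): polynomial kernel parameters,
%x(5): weight ratio of the two kernels in the hybrid kernel
Tpred = dhkelm_train_predict(Ptrain, Ttrain, Pvalid, ...
                             x(1), x(2), x(3:4), x(5));  %hypothetical helper
fit = sqrt(mean((Tpred - Tvalid).^2));                   %RMSE: smaller is better
end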

        The result of the slime mould optimization algorithm on a benchmark function-extremum optimization problem:

 4. Effect comparison

       

 It can be seen that the predictions of the optimized deep hybrid kernel extreme learning machine are closer to the real values, i.e. its prediction accuracy is higher.


Origin blog.csdn.net/m0_61363749/article/details/126161152