Regression fitting | Grey Wolf Optimizer-optimized Kernel Extreme Learning Machine (GWO-KELM): MATLAB implementation

This week a reader sent me a private message asking for an article on GWO-KELM, so I took advantage of today's break to write it up (I hope it's not too late).

The author introduced the principles and implementations of ELM and KELM in a previous article. ELM trains quickly, has low complexity, and avoids the local minima, overfitting, and learning-rate selection problems of traditional gradient-based algorithms. KELM replaces ELM's random feature mapping with a kernel mapping, which mitigates the loss of generalization and stability caused by randomly assigned hidden-layer neurons and performs better on nonlinear problems [1].
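To make the KELM side concrete, here is a minimal Python sketch (the article's actual code is MATLAB, so this is an illustrative stand-in): training solves for output weights beta = (Omega + I/C)^(-1) y, where Omega is the kernel matrix, C is the regularization coefficient, and gamma is an assumed RBF kernel width.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel: K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

class KELM:
    """Kernel Extreme Learning Machine for regression (after Huang et al. [1])."""
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        omega = rbf_kernel(X, X, self.gamma)
        # Output weights: beta = (Omega + I/C)^(-1) y
        self.beta = np.linalg.solve(omega + np.eye(len(X)) / self.C, y)
        return self

    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.gamma) @ self.beta

# Fit a noisy sine curve as a toy regression problem
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(200)
model = KELM(C=100.0, gamma=1.0).fit(X, y)
train_mae = np.mean(np.abs(model.predict(X) - y))
```

Note that no hidden-layer weights are ever drawn at random here; the kernel matrix plays that role, which is exactly the stability advantage described above.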

The Grey Wolf Optimizer (GWO) performs optimization by simulating the hunting behavior of grey wolf packs and their cooperative social hierarchy. This mechanism balances exploration and exploitation well, improving both convergence speed and solution accuracy. GWO is simple in principle, inherently parallel, easy to implement, requires few parameters to tune, needs no gradient information about the problem, and has strong global search ability.
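The pack mechanics can be sketched compactly. In the standard GWO update, the three best wolves (alpha, beta, delta) guide every other wolf, and the coefficient a decays linearly from 2 to 0 to shift the search from exploration to exploitation. The following is a minimal Python sketch, not the article's MATLAB code; population size and iteration count are illustrative.

```python
import numpy as np

def gwo(fitness, dim, lb, ub, n_wolves=20, max_iter=100, seed=0):
    """Minimal Grey Wolf Optimizer for minimization."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, (n_wolves, dim))
    scores = np.array([fitness(w) for w in wolves])
    for t in range(max_iter):
        # Alpha, beta, delta: the three best wolves lead the hunt
        leaders = wolves[np.argsort(scores)[:3]].copy()
        a = 2 - 2 * t / max_iter  # decays linearly from 2 to 0
        for i in range(n_wolves):
            pos = np.zeros(dim)
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a  # |A| > 1 encourages exploration
                C = 2 * r2
                D = np.abs(C * leader - wolves[i])
                pos += leader - A * D
            # New position: average of the three leader-guided moves
            wolves[i] = np.clip(pos / 3, lb, ub)
            scores[i] = fitness(wolves[i])
    best = np.argmin(scores)
    return wolves[best], scores[best]

# Sanity check on the 5-dimensional sphere function
best_x, best_f = gwo(lambda x: np.sum(x**2), dim=5, lb=-10, ub=10)
```

Note the only algorithm parameters are the population size and iteration budget, which is the "few parameters" property mentioned above.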

Therefore, the author combines ELM and KELM with the grey wolf optimizer, applies them to regression fitting problems, and compares them with traditional machine learning algorithms such as the BP neural network and PSO-KELM.

00 Contents

1 GWO-KELM model
2 Code directory
3 Prediction performance
4 Source code acquisition
References

01 GWO-KELM model

1.1 GWO and KELM principles

GWO is the Grey Wolf Optimizer, and KELM is the Kernel Extreme Learning Machine. The author has explained their specific principles in previous articles, linked below, so they are not repeated here.

The principle of the kernel extreme learning machine and its MATLAB code implementation.
The principle of the grey wolf optimization algorithm and its MATLAB code implementation.

1.2 GWO-KELM prediction model

GWO is combined with KELM by using the MAE of the KELM model's predictions as GWO's fitness function. The model workflow is as follows:
(Figure: GWO-KELM model flowchart)
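The fitness evaluation at the heart of this workflow can be sketched as follows. This is an illustrative Python stand-in for the MATLAB code: the dataset is a toy surface, and GWO is assumed to search over the KELM hyperparameters (C, gamma), minimizing validation MAE.

```python
import numpy as np

# Toy regression data (stand-in for the article's dataset)
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (300, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])
X_tr, y_tr = X[:200], y[:200]   # training split
X_va, y_va = X[200:], y[200:]   # validation split

def rbf(A, B, gamma):
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def fitness(params):
    """Fitness = validation MAE of a KELM trained with (C, gamma) = params."""
    C, gamma = params
    # KELM output weights: beta = (Omega + I/C)^(-1) y
    beta = np.linalg.solve(rbf(X_tr, X_tr, gamma) + np.eye(len(X_tr)) / C, y_tr)
    pred = rbf(X_va, X_tr, gamma) @ beta
    return np.mean(np.abs(pred - y_va))

# GWO minimizes this fitness over (C, gamma); here we evaluate one candidate
mae = fitness((100.0, 1.0))
```

Each GWO wolf encodes one (C, gamma) pair, so every fitness call trains a full KELM; the per-call cost is dominated by the linear solve on the training kernel matrix.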

02 Code directory

(Figure: code directory)

Among them, each MY_XX_Reg.m file is a main program that can be run independently, and result.m compares the prediction performance of the different algorithms: it runs the 5 MY_XX_Reg.m scripts in sequence and compares their results.

03 Prediction performance

3.1 Evaluation indicators

To verify the accuracy and precision of the prediction results, the root mean square error (RMSE), the mean absolute percentage error (MAPE), and the mean absolute error (MAE) are used as evaluation criteria:
RMSE = sqrt( (1/n) * Σ (Y_i − Ŷ_i)² )
MAPE = (100/n) * Σ | (Y_i − Ŷ_i) / Y_i |
MAE  = (1/n) * Σ | Y_i − Ŷ_i |

In the formulas, Y_i and Ŷ_i are the true value and the predicted value, respectively; n is the number of samples.
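These three metrics translate directly into code; here is a small Python version with a hand-checkable example (the data values are made up for illustration):

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean square error: penalizes large errors more heavily
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    # Mean absolute percentage error, in percent; assumes no true value is zero
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def mae(y_true, y_pred):
    # Mean absolute error: the fitness used by GWO in this article
    return np.mean(np.abs(y_true - y_pred))

y_true = np.array([2.0, 4.0, 5.0, 8.0])
y_pred = np.array([2.5, 3.5, 5.0, 9.0])
# errors are (-0.5, 0.5, 0.0, -1.0), so MAE = 0.5 and MAPE = 12.5
```

Since MAE is the GWO fitness, the other two serve as independent checks that the optimized model did not trade one error profile for another.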

3.2 Comparison of results
(Figures: comparison of prediction results and convergence curves for each algorithm)

As the figures show, among the prediction models, the KELM models optimized by GWO and PSO achieve good results, and GWO outperforms PSO in both convergence speed and prediction accuracy.

04 Source code acquisition

On the author’s public account: KAU’s cloud experimental platform

References

[1] Huang G B, Zhou H M, Ding X J, et al. Extreme learning machine for regression and multiclass classification[J]. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2012, 42(2): 513-529.

One more note: if any readers have optimization problems to solve (in any field), you can send them to me, and I will selectively write articles that apply optimization algorithms to those problems.

If this article helped or inspired you, feel free to click the like button (ง •̀_•́)ง in the lower right corner (no obligation). If you have any customization needs, you can send the author a private message.

Origin blog.csdn.net/sfejojno/article/details/132636532