ELM (Extreme Learning Machine) is a feedforward neural network with a single hidden layer. The input weights (w, b) are selected randomly and never trained; the only part of the network actually learned in supervised training is the output weight β.
In actual training, there are two important parameters you have to tune:
- the number of hidden-layer neurons L
- the regularization parameter C of ridge regression
There are some good materials for learning the theory and the code of ELM, which I used:
- 极限学习机简介: the most helpful article! It first introduces the mathematical background of the Moore-Penrose pseudoinverse and the minimum-norm least-squares solution, then explains the algorithm very clearly.
- Extreme Learning Machines 极限学习机: a simple and clear explanation with short but sufficient code.
- 极限学习机应用于入侵检测(一): quite detailed on the computation of β; explains the disadvantages of BP and which formula should be used in which situation.
- ELM python实现: discusses the importance of normalization.
- 极限学习机(ELM)从原理到程序实现(附完整代码): also good material for learning.