Techniques for Setting the Number of Hidden Layer Neurons in Neural Networks

Recently I have been studying neural networks, and the design of the input layer, hidden layer, and output layer comes up constantly. The sizes of the input and output layers are determined by the data itself, but the number of neurons in the hidden layer left me confused. After looking through some material, I have collected the following rules of thumb.
Method 1:
Gorman pointed out that the relationship between the number of hidden layer nodes s and the number of patterns N is s = log2(N);

Method 2:
Kolmogorov's theorem suggests that the number of hidden layer nodes is s = 2n + 1 (n is the number of input layer nodes);

Method 3:
s = sqrt(0.43mn + 0.12n^2 + 2.54m + 0.77n + 0.35) + 0.51
(m is the number of input layer nodes, n is the number of output layer nodes).
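As a quick way to compare these heuristics, here is a minimal Python sketch. The function names and the example numbers (10 inputs, 3 outputs, 1000 training patterns) are my own choices for illustration, not part of the original formulas.

```python
import math

def hidden_units_gorman(num_patterns):
    """Method 1 (Gorman): s = log2(N), where N is the number of patterns."""
    return math.ceil(math.log2(num_patterns))

def hidden_units_kolmogorov(num_inputs):
    """Method 2 (Kolmogorov's theorem): s = 2n + 1, where n is the number of input nodes."""
    return 2 * num_inputs + 1

def hidden_units_empirical(num_inputs, num_outputs):
    """Method 3 (empirical formula):
    s = sqrt(0.43mn + 0.12n^2 + 2.54m + 0.77n + 0.35) + 0.51,
    where m is the number of input nodes and n is the number of output nodes."""
    m, n = num_inputs, num_outputs
    s = math.sqrt(0.43 * m * n + 0.12 * n ** 2 + 2.54 * m + 0.77 * n + 0.35) + 0.51
    return math.ceil(s)

# Example: a network with 10 inputs and 3 outputs, trained on 1000 patterns.
print(hidden_units_gorman(1000))      # 10
print(hidden_units_kolmogorov(10))    # 21
print(hidden_units_empirical(10, 3))  # 7
```

Note that the three formulas can give very different answers for the same network, so they are best treated as starting points rather than final values.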


The importance of the proper number of neurons
If the number of hidden layer nodes is too small, the network will not have the learning capacity and information-processing ability it needs. Conversely, if there are too many, the complexity of the network structure increases greatly (which matters especially for networks implemented in hardware), the network becomes more likely to fall into a local minimum during training, and learning becomes very slow. For these reasons, the choice of the number of hidden layer nodes has always received close attention from neural network researchers.
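Because the formulas above are only starting points, in practice the hidden layer size is usually validated empirically. The sketch below uses scikit-learn; the dataset and the candidate sizes are arbitrary choices of mine, meant only to show the "too few vs. too many" trade-off on held-out accuracy.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Compare held-out accuracy across several hidden layer widths.
for size in (2, 8, 32, 128, 512):
    clf = MLPClassifier(hidden_layer_sizes=(size,), max_iter=1000, random_state=0)
    clf.fit(X_train, y_train)
    print(size, round(clf.score(X_test, y_test), 3))
```

Very small hidden layers tend to underfit, while very large ones train more slowly and add capacity without a matching gain in accuracy.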

Origin: blog.csdn.net/lxxlxx888/article/details/102626780