25. TensorFlow Tutorial --- Suggestions for Neural Network Training

In this chapter, we will look at various aspects of neural network training that can be implemented using the TensorFlow framework.

Here are several suggestions that can be evaluated −

Back Propagation
Back propagation is a simple method for computing partial derivatives; it applies the chain rule to the basic forms of function composition that make up a neural network.
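As a minimal sketch (assuming TensorFlow 2.x; the tiny two-layer network, shapes, and variable names are made up for illustration), TensorFlow records the forward pass on a tf.GradientTape and then back-propagates through it to obtain the partial derivatives:

```python
import tensorflow as tf

# Toy two-layer network: y = W2 * relu(W1 * x + b1) + b2
W1 = tf.Variable(tf.random.normal([3, 4]))
b1 = tf.Variable(tf.zeros([4]))
W2 = tf.Variable(tf.random.normal([4, 1]))
b2 = tf.Variable(tf.zeros([1]))

x = tf.random.normal([8, 3])       # batch of 8 examples, 3 features each
y_true = tf.random.normal([8, 1])  # made-up targets

with tf.GradientTape() as tape:
    hidden = tf.nn.relu(tf.matmul(x, W1) + b1)
    y_pred = tf.matmul(hidden, W2) + b2
    loss = tf.reduce_mean(tf.square(y_true - y_pred))

# Back propagation: the tape applies the chain rule backwards through the
# recorded operations, yielding the partial derivative of the loss with
# respect to each variable.
grads = tape.gradient(loss, [W1, b1, W2, b2])
```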

Stochastic Gradient Descent
In gradient descent, a batch is the set of examples used to compute the gradient in a single iteration. So far we have assumed the batch to be the entire data set. When working at Google scale, however, data sets often contain billions or even hundreds of billions of examples, so stochastic gradient descent instead estimates the gradient from a small batch of examples chosen at random.
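A minimal sketch of mini-batch SGD, assuming TensorFlow 2.x; the data set size, batch size of 32, and learning rate are illustrative assumptions, not values from the original:

```python
import tensorflow as tf

# Toy data standing in for a data set too large for full-batch gradients.
features = tf.random.normal([1000, 10])
labels = tf.random.normal([1000, 1])

# tf.data shuffles the examples and slices them into small random batches.
dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
           .shuffle(buffer_size=1000)
           .batch(32))

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

for x_batch, y_batch in dataset:
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x_batch) - y_batch))
    grads = tape.gradient(loss, model.trainable_variables)
    # Each update uses the gradient of only 32 examples, not all 1000.
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```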

Learning Rate Decay
Adapting the learning rate is one of the most important features of gradient descent optimization: decaying it as training progresses helps the optimizer converge, and this is straightforward to implement in TensorFlow.
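One way to implement this, as a sketch using the built-in tf.keras.optimizers.schedules.ExponentialDecay schedule (the initial rate, decay steps, and decay rate below are illustrative assumptions):

```python
import tensorflow as tf

# Exponential decay: lr = 0.1 * 0.96 ** (step / 10000)
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1,
    decay_steps=10000,
    decay_rate=0.96)

# The optimizer queries the schedule at every training step.
optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule)

# The schedule can also be inspected directly at any step number.
print(float(lr_schedule(0)))      # 0.1
print(float(lr_schedule(10000)))  # 0.096
```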

Dropout
Deep neural networks with a large number of parameters form powerful machine learning systems. However, overfitting is a serious problem in such networks; dropout addresses it by randomly dropping units (and their connections) during training.
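A minimal sketch of dropout in a Keras model, assuming TensorFlow 2.x; the layer sizes and the 0.5 drop rate are illustrative assumptions:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(128, activation='relu'),
    # During training, each unit's output is zeroed with probability 0.5,
    # which discourages units from co-adapting and reduces overfitting.
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1),
])

x = tf.random.normal([4, 20])
y_train = model(x, training=True)   # dropout active
y_infer = model(x, training=False)  # dropout disabled at inference
```

Note that dropout is only applied when training=True; at inference time the full network is used, which is why the two calls above can produce different outputs for the same input.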

