20. TensorFlow Tutorial - Optimizer

Optimizers are extension classes that carry the additional state needed to train a specific model. An optimizer is constructed with its hyperparameters (such as the learning rate); no Tensor is required at construction time. Optimizers are used to improve the speed and quality of training a model.

TensorFlow’s basic optimizer is −

tf.train.Optimizer

This class is defined in tensorflow/python/training/optimizer.py.
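In practice you usually instantiate one of its subclasses rather than tf.train.Optimizer itself. As a rough sketch (assuming the TensorFlow 1.x graph API; the variable w and the loss here are hypothetical placeholders), a built-in optimizer can be attached to a loss like this −

import tensorflow as tf

# Hypothetical loss: squared distance of a single trainable weight from 3.0
w = tf.Variable(0.0)
loss = tf.reduce_mean(tf.square(w - 3.0))

# tf.train.GradientDescentOptimizer is a subclass of tf.train.Optimizer;
# minimize() builds an op that applies one gradient descent step to w.
train_op = tf.train.GradientDescentOptimizer(learning_rate = 0.01).minimize(loss)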

Here are some optimizers in TensorFlow −

- Stochastic Gradient Descent
- Stochastic Gradient Descent with Gradient Clipping
- Momentum
- Nesterov Momentum
- Adagrad
- Adadelta
- RMSProp
- Adam
- Adamax
- SMORMS3
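Several of the optimizers in this list are available as built-in classes under tf.train. The sketch below (assuming the TensorFlow 1.x API; the hyperparameter values are illustrative, not recommendations) shows how a few of them are constructed −

import tensorflow as tf

# Each constructor returns an instance of a tf.train.Optimizer subclass.
momentum = tf.train.MomentumOptimizer(learning_rate = 0.01, momentum = 0.9)
nesterov = tf.train.MomentumOptimizer(learning_rate = 0.01, momentum = 0.9,
                                      use_nesterov = True)
adagrad  = tf.train.AdagradOptimizer(learning_rate = 0.01)
adadelta = tf.train.AdadeltaOptimizer(learning_rate = 1.0)
rmsprop  = tf.train.RMSPropOptimizer(learning_rate = 0.001)
adam     = tf.train.AdamOptimizer(learning_rate = 0.001)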

We will focus on the stochastic gradient descent optimizer. An example of building this optimizer by hand is shown below −

import numpy as np
import tensorflow as tf

def sgd(cost, params, lr = np.float32(0.01)):
   # Gradients of the cost with respect to each parameter
   g_params = tf.gradients(cost, params)
   updates = []

   # SGD update rule: param <- param - lr * gradient
   for param, g_param in zip(params, g_params):
      updates.append(param.assign(param - lr * g_param))
   return updates
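As a hypothetical usage sketch (assuming the TensorFlow 1.x session API and the sgd function defined above), the returned update ops can be run repeatedly to fit a variable −

import numpy as np
import tensorflow as tf

# Toy problem: find w that minimizes (w - 3)^2.
w = tf.Variable(np.float32(0.0))
cost = tf.square(w - 3.0)

updates = sgd(cost, [w], lr = np.float32(0.1))

with tf.Session() as sess:
   sess.run(tf.global_variables_initializer())
   for _ in range(100):
      sess.run(updates)
   print(sess.run(w))   # close to 3.0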

The basic parameters (the cost, the list of parameters, and the learning rate) are defined within the function itself. In subsequent chapters, we will focus on gradient descent optimization and the implementation of this optimizer.

