Machine learning PyTorch platform code study notes (3) - activation function

If your neural network has only two or three layers, the choice of activation function for the hidden layers barely matters; almost any of them will work. However, once you build a deep, multi-layer network, you can no longer pick activation functions arbitrarily, because the choice directly affects the problems of exploding and vanishing gradients.
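To see the vanishing side of this concretely, here is a minimal sketch: it pushes a value through 20 stacked sigmoids (the depth of 20 is an arbitrary choice for illustration) and checks the gradient at the input. Because sigmoid's derivative is at most 0.25, the gradient shrinks multiplicatively with depth:

import torch

# Minimal sketch: 20 stacked sigmoids (an arbitrary "depth") shrink the
# gradient at the input, since sigmoid's derivative never exceeds 0.25
x = torch.tensor(1.0, requires_grad=True)
y = x
for _ in range(20):
    y = torch.sigmoid(y)
y.backward()
print(x.grad)  # roughly 1e-13: the gradient has all but vanished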

With only a few layers we can try many different activation functions. For the convolutional layers of a convolutional neural network, the recommended activation function is relu; for recurrent neural networks, the recommended activation functions are tanh or relu.
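As a quick sketch of how these recommendations look in practice (the layer sizes here are arbitrary): ReLU is placed after each convolution, and a plain RNN takes the activation as its nonlinearity argument:

import torch.nn as nn

# ReLU after each convolutional layer of a CNN (sizes are illustrative)
cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
)

# A plain RNN lets you pick the recommended activation directly
rnn_tanh = nn.RNN(input_size=10, hidden_size=20, nonlinearity='tanh')
rnn_relu = nn.RNN(input_size=10, hidden_size=20, nonlinearity='relu')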

There are many activation functions in Torch, but these are the ones we usually use: relu, sigmoid, tanh, softplus.
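Each of these exists both as a function in torch.nn.functional (the form used in the script below, convenient for quick plots) and as a layer-style module in torch.nn, which is what you would drop into an nn.Sequential:

import torch.nn as nn

# Module counterparts of the functional versions used below
relu, sigmoid = nn.ReLU(), nn.Sigmoid()
tanh, softplus = nn.Tanh(), nn.Softplus()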


import torch
import torch.nn.functional as F  # the activation functions live here
from torch.autograd import Variable  # legacy wrapper; a no-op since PyTorch 0.4

# make some fake data so we can look at the curves
x = torch.linspace(-5, 5, 200)  # x data (tensor), shape=(200,)
x = Variable(x)
x_np = x.data.numpy()  # convert to a numpy array for plotting

# Several commonly used activation functions, each converted to a numpy
# array for the plots below
y_relu = F.relu(x).data.numpy()
y_sigmoid = torch.sigmoid(x).data.numpy()  # F.sigmoid is deprecated
y_tanh = torch.tanh(x).data.numpy()        # F.tanh is deprecated
y_softplus = F.softplus(x).data.numpy()
# y_softmax = F.softmax(x, dim=0)  # softmax outputs probabilities for
# classification and is not drawn as a single curve here
import matplotlib.pyplot as plt  # plotting: a visualization module for Python

plt.figure(1, figsize=(8, 6))
plt.subplot(221)
plt.plot(x_np, y_relu, c='red', label='relu')
plt.ylim((-1, 5))
plt.legend(loc='best')

plt.subplot(222)
plt.plot(x_np, y_sigmoid, c='red', label='sigmoid')
plt.ylim((-0.2, 1.2))
plt.legend(loc='best')

plt.subplot(223)
plt.plot(x_np, y_tanh, c='red', label='tanh')
plt.ylim((-1.2, 1.2))
plt.legend(loc='best')

plt.subplot(224)
plt.plot(x_np, y_softplus, c='red', label='softplus')
plt.ylim((-0.2, 6))
plt.legend(loc='best')

plt.show()
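The commented-out softmax line deserves a word: softmax is not drawn above because it is not a pointwise curve like the others; applied along a dimension, it turns a vector of raw scores into class probabilities that sum to 1. A minimal sketch (the scores here are made up):

import torch
import torch.nn.functional as F

scores = torch.tensor([[1.0, 2.0, 3.0]])  # made-up scores for 3 classes
probs = F.softmax(scores, dim=1)          # softmax over the class dimension
print(probs)        # tensor([[0.0900, 0.2447, 0.6652]])
print(probs.sum())  # tensor(1.)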

Reference links:

https://morvanzhou.github.io/tutorials/machine-learning/torch/2-03-A-activation-function/

https://morvanzhou.github.io/tutorials/machine-learning/torch/2-03-activation/

