Activation Function

1. The role of the activation function

Without an activation function, every layer of a neural network performs only a linear transformation, and stacking multiple such layers still yields a linear transformation. Because a purely linear model has limited expressive power, activation functions are introduced to add nonlinearity.
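A quick sketch with torch.nn illustrates this (the layer sizes here are chosen arbitrarily for the example): two stacked Linear layers with no activation collapse into a single linear map, while inserting a ReLU between them breaks that collapse.

import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(4, 3)

# Two linear layers with no activation in between
linear_stack = nn.Sequential(nn.Linear(3, 5), nn.Linear(5, 2))
with torch.no_grad():
    # Compose the two layers by hand: W = W2 @ W1, b = W2 @ b1 + b2
    W = linear_stack[1].weight @ linear_stack[0].weight
    b = linear_stack[1].weight @ linear_stack[0].bias + linear_stack[1].bias
    print(torch.allclose(linear_stack(x), x @ W.t() + b))   # True: still one linear transformation

# With a nonlinear activation (ReLU) between the layers, the network is no longer a single linear map
nonlinear_stack = nn.Sequential(nn.Linear(3, 5), nn.ReLU(), nn.Linear(5, 2))
print(nonlinear_stack(x).shape)   # torch.Size([4, 2])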

2. Activation functions in Torch

import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt

# Generate some sample data just to plot the functions
x = torch.linspace(-5, 5, 200)   # a tensor of 200 points in [-5, 5]
x_np = x.numpy()                 # convert the tensor to numpy; matplotlib only accepts numpy data

# Several common activation functions
y_relu = torch.relu(x).numpy()
y_sigmoid = torch.sigmoid(x).numpy()
y_tanh = torch.tanh(x).numpy()
y_softplus = F.softplus(x).numpy()   # softplus is not available in torch directly; it comes from torch.nn.functional
# y_softmax = F.softmax(x, dim=0)    # softmax is special: it outputs probabilities for classification, so it is not plotted here

plt.figure(1, figsize=(8, 6))
plt.subplot(221)
plt.plot(x_np, y_relu, c='red', label='relu')
plt.ylim((-1, 5))
plt.legend(loc='best')

plt.subplot(222)
plt.plot(x_np, y_sigmoid, c='red', label='sigmoid')
plt.ylim((-0.2, 1.2))
plt.legend(loc='best')

plt.subplot(223)
plt.plot(x_np, y_tanh, c='red', label='tanh')
plt.ylim((-1.2, 1.2))
plt.legend(loc='best')

plt.subplot(224)
plt.plot(x_np, y_softplus, c='red', label='softplus')
plt.ylim((-0.2, 6))
plt.legend(loc='best')

plt.show()
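In a real model these activations are applied between layers rather than plotted. A minimal sketch (the layer sizes here are chosen arbitrarily for the example) applying F.relu to a hidden layer:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(1, 10)   # hidden layer
        self.out = nn.Linear(10, 1)      # output layer

    def forward(self, x):
        x = F.relu(self.hidden(x))       # apply the nonlinear activation to the hidden output
        return self.out(x)

net = Net()
x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1)   # shape (100, 1)
print(net(x).shape)                                      # torch.Size([100, 1])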

 
