DataScience: In-depth discussion and analysis of nonlinear transformations in data processing for machine learning - log (logarithmic) transformation, sigmoid / softmax conversion


Table of Contents

In-depth discussion and analysis of nonlinear transformations in data processing for machine learning

Log (logarithmic) transformation

sigmoid / softmax conversion

Sigmoid function

Softmax function


 

 

Related articles
DataScience: In-depth discussion and analysis of linear transformations in data processing for machine learning - the differences and connections between Standardization and Normalization / Scaling
DataScience: In-depth discussion and analysis of nonlinear transformations in data processing for machine learning - log (logarithmic) transformation, sigmoid / softmax conversion

 

 

In-depth discussion and analysis of nonlinear transformations in data processing for machine learning

Log (logarithmic) transformation

         If the b-th power of a (a > 0 and a ≠ 1) equals N, that is, a^b = N, then b is called the logarithm of N to the base a, written log_a(N) = b (a is called the base and N is called the antilogarithm). Replacing a feature x with log_a(x) in this way is a logarithmic transformation; it compresses large values and spreads out small ones, which makes it useful for right-skewed data.
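
As an illustration (not from the original post), a minimal NumPy sketch of applying a log transformation to a right-skewed feature; np.log1p, i.e. log(1 + x), is used here so that zero values remain valid:

import numpy as np

# A right-skewed feature: most values are small, a few are very large
x = np.array([1.0, 2.0, 3.0, 10.0, 100.0, 1000.0])

# log1p computes log(1 + x); it keeps x = 0 valid and compresses large values
x_log = np.log1p(x)
print(x_log)  # approximately [0.69, 1.10, 1.39, 2.40, 4.62, 6.91]

# The transformation is monotonic, so the ordering of samples is preserved,
# but the gap between small and large values shrinks dramatically (1000 -> ~6.9).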

 

 

sigmoid / softmax conversion

Reference article: DL之AF: Overview, applications, computational graphs, and implementation details of common activation functions (sigmoid, softmax, etc.) in machine learning / deep learning

Sigmoid function

       The Sigmoid function is a common S-shaped function in biology, also known as the S-shaped growth curve. [1] In information science, because the function and its inverse are both monotonically increasing, the Sigmoid function is often used as the activation function of a neural network, mapping a variable to a value between 0 and 1; it is defined as sigmoid(x) = 1 / (1 + e^(-x)).

  • Advantages: smooth and easy to differentiate.
  • Disadvantages: as an activation function it is relatively expensive to compute; when computing the gradient of the error during backpropagation, the derivative involves division; and during backpropagation the gradient easily vanishes, so training of a deep network may fail to complete (see the sketch below).
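
As a minimal sketch (not part of the original article), the sigmoid function and its derivative in NumPy; the derivative never exceeds 0.25, which illustrates why gradients shrink when many sigmoid layers are chained:

import numpy as np

def sigmoid(x):
    # sigmoid(x) = 1 / (1 + e^(-x)), squashing any real value into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
y = sigmoid(x)
print(y)  # approximately [0.018 0.269 0.5 0.731 0.982]

# Derivative: sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)), at most 0.25 (at x = 0)
print(y * (1 - y))  # approximately [0.018 0.197 0.25 0.197 0.018]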

Softmax function

       In mathematics, especially in probability theory and related fields, the normalized exponential function, also known as the Softmax function, is a generalization of the logistic function. It "compresses" an arbitrary K-dimensional real vector z into another K-dimensional real vector σ(z) such that each element lies in the range (0, 1) and all of the elements sum to 1: σ(z)_j = e^(z_j) / Σ_k e^(z_k). The function is mostly used in multi-class classification problems.

import math

# An arbitrary 7-dimensional input vector
z = [1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0]

# Exponentiate every element
z_exp = [math.exp(i) for i in z]
print(z_exp)  # Result: [2.72, 7.39, 20.09, 54.6, 2.72, 7.39, 20.09] (rounded)

# Normalizing constant: the sum of the exponentials
sum_z_exp = sum(z_exp)
print(sum_z_exp)  # Result: 114.98 (rounded)

# Each softmax output is e^(z_i) divided by the sum, so the outputs sum to 1
softmax = [round(i / sum_z_exp, 3) for i in z_exp]
print(softmax)  # Result: [0.024, 0.064, 0.175, 0.475, 0.024, 0.064, 0.175]
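
A common refinement, sketched below as an addition rather than part of the original article: subtracting max(z) before exponentiating avoids overflow for large inputs and leaves the result unchanged, because softmax is invariant to adding a constant to every element of z.

import math

def softmax_stable(z):
    # Shift by the maximum so the largest exponent is e^0 = 1 (no overflow)
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

print([round(p, 3) for p in softmax_stable([1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0])])
# Same result as above: [0.024, 0.064, 0.175, 0.475, 0.024, 0.064, 0.175]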

 

 

 

 
