Deep Learning 3: Matrix Calculus and Derivatives

Deep learning models are optimized by taking derivatives of an objective function. These notes are based on Li Mu's lecture series.

1. Scalar derivative: the derivative is the slope of the tangent line
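The lecture illustrates this with plots; as a small sketch (not from the original notes), the slope of the tangent line can be approximated numerically with a central difference:

```python
def numerical_derivative(f, x, h=1e-6):
    # Central difference: approximates the slope of the tangent line at x.
    return (f(x + h) - f(x - h)) / (2 * h)

# f(x) = x**2 has analytic derivative f'(x) = 2x, so the slope at x = 3 is 6.
print(numerical_derivative(lambda x: x ** 2, 3.0))
```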

 2. Subderivative: extends the derivative to functions that are not differentiable everywhere, such as |x| at x = 0
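For the standard example |x|, any value in [-1, 1] is a valid subderivative at 0. A minimal illustration (my own, not from the lecture):

```python
def abs_subgradient(x):
    # |x| is differentiable away from 0, with derivative sign(x).
    # At x = 0 the subdifferential is the whole interval [-1, 1],
    # so any value in it is a valid subgradient.
    if x > 0:
        return 1.0
    if x < 0:
        return -1.0
    return 0.0  # one common choice from [-1, 1]
```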

 3. Gradient

 1. Case 1: y is a scalar and x is a vector. The derivative collects the partial derivatives of y with respect to each component of x, and the gradient points in the direction in which y changes fastest

 

 

⟨·, ·⟩ denotes the inner product
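The figures for this case are not preserved in the notes; as an illustrative sketch, the gradient of a scalar function of a vector can be built up one partial derivative at a time. Here y = ⟨x, x⟩, whose analytic gradient is 2x:

```python
def numerical_gradient(f, x, h=1e-6):
    # For scalar y = f(x) with vector x, the gradient collects the
    # partial derivative of y along each coordinate of x.
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2 * h))
    return grad

# y = <x, x> (inner product of x with itself); analytic gradient is 2x.
f = lambda v: sum(t * t for t in v)
print(numerical_gradient(f, [1.0, 2.0, 3.0]))  # close to [2, 4, 6]
```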

2. Case 2: y is a column vector and x is a scalar; the derivative is a column vector of the same shape as y

 
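A small sketch of this case (my own example, assuming y stacks several scalar functions of one scalar x): differentiating each component of y gives a column vector, represented here as a plain Python list.

```python
def column_derivative(fs, x, h=1e-6):
    # y = [f1(x), ..., fm(x)] with scalar x; dy/dx stacks each fi'(x)
    # into a column vector (represented here as a Python list).
    return [(f(x + h) - f(x - h)) / (2 * h) for f in fs]

# y = [x, x**2]; dy/dx = [1, 2x], so at x = 3 the result is close to [1, 6].
print(column_derivative([lambda x: x, lambda x: x ** 2], 3.0))
```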

 3. Case 3: x and y are both vectors; the derivative becomes a matrix (the Jacobian), with one row per component of y and one column per component of x

 

 
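The matrix in this case can be sketched numerically as well (again an illustrative example, not from the lecture): for f mapping an n-vector to an m-vector, the Jacobian is the m × n matrix of partials ∂y_i/∂x_j.

```python
def numerical_jacobian(f, x, h=1e-6):
    # f maps a vector x (length n) to a vector y (length m); the derivative
    # is the m x n Jacobian matrix with entries dy_i / dx_j.
    m, n = len(f(x)), len(x)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        yp, ym = f(xp), f(xm)
        for i in range(m):
            J[i][j] = (yp[i] - ym[i]) / (2 * h)
    return J

# y = [x0 + x1, x0 * x1]; Jacobian = [[1, 1], [x1, x0]].
f = lambda v: [v[0] + v[1], v[0] * v[1]]
print(numerical_jacobian(f, [2.0, 5.0]))  # close to [[1, 1], [5, 2]]
```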

Original post: blog.csdn.net/weixin_68479946/article/details/128970964