Andrew Ng Deep Learning Notes: Vectorization

Vectorization

In deep learning algorithms we usually work with large amounts of data. When writing programs, you should avoid explicit for loops as much as possible and instead use Python's matrix operations (NumPy), which greatly improve the running speed of the program.
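As a quick illustration of why this matters, here is a minimal timing sketch (the array size and random data are hypothetical) comparing np.dot with an explicit for loop:

import numpy as np
import time

n = 1000000
a = np.random.rand(n)
b = np.random.rand(n)

# Vectorized dot product
tic = time.time()
c = np.dot(a, b)
print("Vectorized: %.2f ms" % (1000 * (time.time() - tic)))

# Explicit for loop over the same data
tic = time.time()
c = 0.0
for i in range(n):
    c += a[i] * b[i]
print("For loop: %.2f ms" % (1000 * (time.time() - tic)))

On typical hardware the vectorized version is hundreds of times faster, because NumPy dispatches the computation to optimized compiled code.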

Vectorizing logistic regression

Input matrix $X$: $(n_x, m)$
Weight matrix $w$: $(n_x, 1)$
Bias $b$: a constant
Output matrix $Y$: $(1, m)$

The linear outputs $Z$ of all $m$ samples can be represented by a single matrix operation:

$Z = w^T X + b$
Z = np.dot(w.T, X) + b
A = sigmoid(Z)

Key Summary:
The linear outputs $Z$ of all $m$ samples can be represented by a matrix operation:

$Z = w^T X + b$
Z = np.dot(w.T, X) + b
A = sigmoid(Z)
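As a self-contained illustration, here is a minimal runnable sketch of this vectorized forward pass; the sizes and random data are hypothetical, and sigmoid is defined explicitly since NumPy does not ship one:

import numpy as np

def sigmoid(z):
    # Element-wise logistic function
    return 1.0 / (1.0 + np.exp(-z))

nx, m = 4, 10                # number of features and samples (hypothetical)
X = np.random.randn(nx, m)   # input matrix, shape (nx, m)
w = np.random.randn(nx, 1)   # weight vector, shape (nx, 1)
b = 0.0                      # bias, a scalar

Z = np.dot(w.T, X) + b       # linear outputs for all m samples, shape (1, m)
A = sigmoid(Z)               # predictions, shape (1, m)
print(A.shape)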
Vectorizing gradient descent for logistic regression

For the cross-entropy loss, the derivative of the cost with respect to each $z^{(i)}$ is $a^{(i)} - y^{(i)}$, so for the $m$ samples $dZ$ has dimension $(1, m)$ and can be expressed as:

$dZ = A - Y$

$db$ can be expressed as:

$db = \frac{1}{m} \sum_{i=1}^{m} dz^{(i)}$
db = 1 / m * np.sum(dZ)

$dw$ can be expressed as:

$dw = \frac{1}{m} X \cdot dZ^T$
dw = 1 / m * np.dot(X, dZ.T)
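These formulas are easy to sanity-check. Below is a minimal sketch, using hypothetical random data, that compares the vectorized $db$ against a centered finite-difference estimate of the cross-entropy cost:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(w, b, X, Y):
    # Cross-entropy cost averaged over the m samples
    A = sigmoid(np.dot(w.T, X) + b)
    return -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A))

nx, m = 3, 8
rng = np.random.default_rng(0)
X = rng.standard_normal((nx, m))
Y = (rng.random((1, m)) > 0.5).astype(float)
w = rng.standard_normal((nx, 1))
b = 0.1

A = sigmoid(np.dot(w.T, X) + b)
dZ = A - Y
dw = 1 / m * np.dot(X, dZ.T)
db = 1 / m * np.sum(dZ)

eps = 1e-6
db_numeric = (cost(w, b + eps, X, Y) - cost(w, b - eps, X, Y)) / (2 * eps)
print(db, db_numeric)   # the two values should agree closely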

A single iteration of the gradient descent algorithm then proceeds as follows:

Z = np.dot(w.T, X) + b
A = sigmoid(Z)
dZ = A - Y
dw = 1 / m * np.dot(X, dZ.T)
db = 1 / m * np.sum(dZ)

w = w - alpha * dw
b = b - alpha * db
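Putting the pieces together, here is a sketch of a complete training loop built from exactly these lines; the toy data, learning rate, and iteration count are hypothetical:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

nx, m = 2, 100
rng = np.random.default_rng(1)
X = rng.standard_normal((nx, m))
Y = (X[0:1, :] + X[1:2, :] > 0).astype(float)   # a linearly separable toy labeling

w = np.zeros((nx, 1))
b = 0.0
alpha = 0.1   # learning rate

for i in range(1000):
    Z = np.dot(w.T, X) + b
    A = sigmoid(Z)
    dZ = A - Y
    dw = 1 / m * np.dot(X, dZ.T)
    db = 1 / m * np.sum(dZ)
    w = w - alpha * dw
    b = b - alpha * db

print(np.mean((A > 0.5) == Y))   # training accuracy; approaches 1.0 on this toy problem

Note that vectorization removes the inner loop over the $m$ samples; the outer loop over gradient descent iterations still remains.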

For the neural network structure shown in the figure below, the network's forward output can be computed with only four lines of Python code (see the sketch after the figure).

[Figures: neural network structure and its forward-propagation equations (images not preserved)]
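A minimal sketch of those four lines, assuming a tanh hidden activation and a sigmoid output (a common choice in the course), with W1, b1, W2, b2 already initialized to the shapes listed in the next section:

Z1 = np.dot(W1, X) + b1   # (n[1], m)
A1 = np.tanh(Z1)          # hidden-layer activations
Z2 = np.dot(W2, A1) + b2  # (n[2], m)
A2 = sigmoid(Z2)          # network output, shape (n[2], m)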

Key Summary:
Gradient descent for a neural network

This section uses the shallow neural network above as an example to give the gradient descent equations for a neural network.

Parameters: $W^{[1]}, b^{[1]}, W^{[2]}, b^{[2]}$;
Number of input-layer features: $n_x = n^{[0]}$;
Number of hidden-layer neurons: $n^{[1]}$;
Number of output-layer neurons: $n^{[2]} = 1$;
$W^{[1]}$ has dimension $(n^{[1]}, n^{[0]})$ and $b^{[1]}$ has dimension $(n^{[1]}, 1)$;
$W^{[2]}$ has dimension $(n^{[2]}, n^{[1]})$ and $b^{[2]}$ has dimension $(n^{[2]}, 1)$;
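These shapes can be set up directly in NumPy; a sketch with hypothetical layer sizes, using the course convention of small random weights and zero biases:

n0, n1, n2 = 3, 4, 1                  # n[0] = nx, hidden size n[1], output size n[2]
W1 = np.random.randn(n1, n0) * 0.01   # (n[1], n[0])
b1 = np.zeros((n1, 1))                # (n[1], 1)
W2 = np.random.randn(n2, n1) * 0.01   # (n[2], n[1])
b2 = np.zeros((n2, 1))                # (n[2], 1)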
The figure below gives the gradient descent (backpropagation) equations for this example network (left) and their vectorized code (right); a sketch of the standard equations follows the figure:
[Figure: backpropagation equations (left) and their vectorized implementation (right), image not preserved]
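Since the image itself is not preserved, here is a sketch of the standard vectorized backpropagation equations for this two-layer network, assuming the tanh hidden activation and sigmoid output used in the forward pass above:

dZ2 = A2 - Y                                       # (n[2], m)
dW2 = 1 / m * np.dot(dZ2, A1.T)                    # (n[2], n[1])
db2 = 1 / m * np.sum(dZ2, axis=1, keepdims=True)   # (n[2], 1)
dZ1 = np.dot(W2.T, dZ2) * (1 - A1 ** 2)            # tanh'(Z1) = 1 - A1**2
dW1 = 1 / m * np.dot(dZ1, X.T)                     # (n[1], n[0])
db1 = 1 / m * np.sum(dZ1, axis=1, keepdims=True)   # (n[1], 1)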

Origin: blog.csdn.net/h_l_dou/article/details/102667773