Week 4: Shallow Neural Networks

Basic Neural Network Structure

[Figure: a two-layer neural network]
As shown in the figure, this is a two-layer neural network (the input layer is usually not counted as a layer).
The formulas for each layer are as follows:
$Z^{[1]}_1 = w_1^{[1]T}x + b_1^{[1]}, \quad a^{[1]}_1 = \alpha(Z_1^{[1]})$
$\ldots$
The middle layer is called the hidden layer.
Every layer of the network has two parameters, w and b, and the computation is:
$Z^{[1]} = W^{[1]}x + b^{[1]}$
$a^{[1]} = \alpha(Z^{[1]})$
$Z^{[2]} = W^{[2]}a^{[1]} + b^{[2]}$
$a^{[2]} = \alpha(Z^{[2]})$

Activation Functions

In the formulas above, besides the parameters w and b there is one more unknown, $\alpha$: the activation function.
If there were no activation function, i.e. $a^{[1]} = Z^{[1]}$ directly, then substituting and simplifying shows that the output $a^{[2]}$ is just a linear function of the input, so the hidden layer would add nothing.
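Making this explicit: with a linear "activation" in both layers,
$a^{[2]} = W^{[2]}\left(W^{[1]}x + b^{[1]}\right) + b^{[2]} = \left(W^{[2]}W^{[1]}\right)x + \left(W^{[2]}b^{[1]} + b^{[2]}\right) = W'x + b'$
so the whole network collapses to a single linear layer.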
(If the problem is a regression, e.g. the input x is a real number and the output is also a real number, a linear activation may be useful in the output layer.)
There are four common activation functions in neural networks:
1. The basic sigmoid function; it is generally not used except in the output layer of a binary classifier.
2. The tanh function, which is better than sigmoid in almost all cases, but when |z| is large its derivative approaches 0, which slows learning.
3. The ReLU (rectified linear unit), the default choice; if you are unsure which activation to use, use this one.
4. The leaky ReLU, whose derivative does not drop to 0 for negative z but keeps a small positive slope.
Their formulas and derivatives are:
1. Sigmoid: $g(z) = a = \dfrac{1}{1+e^{-z}}$, with derivative $g'(z) = a(1-a)$
2. Tanh: $g(z) = a = \dfrac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$, with derivative $g'(z) = 1-a^{2}$
3. ReLU: $g(z) = \max(0, z)$, with derivative $g'(z) = \begin{cases} 0 & z<0 \\ 1 & z>0 \\ \text{either value (your choice)} & z=0 \end{cases}$
4. Leaky ReLU: $g(z) = \max(0.01z, z)$, with derivative $g'(z) = \begin{cases} 0.01 & z<0 \\ 1 & z \ge 0 \end{cases}$
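A minimal NumPy sketch of these four activations and their derivatives (the function names here are my own, chosen only for illustration):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def sigmoid_grad(z):
    a = sigmoid(z)
    return a * (1 - a)

def tanh_grad(z):
    a = np.tanh(z)                     # np.tanh computes (e^z - e^-z) / (e^z + e^-z)
    return 1 - a ** 2

def relu(z):
    return np.maximum(0, z)

def relu_grad(z):
    return np.where(z > 0, 1.0, 0.0)   # derivative taken as 0 at z = 0

def leaky_relu(z):
    return np.maximum(0.01 * z, z)

def leaky_relu_grad(z):
    return np.where(z < 0, 0.01, 1.0)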

Random Initialization

In practice the weights W are initialized to small random values (the biases b can simply start at zero); the factor 0.01 keeps the initial pre-activations small so tanh/sigmoid do not start out saturated.
Example: $W^{[1]} = \text{np.random.randn}(2, 2) * 0.01$

np.sum

np.sum(x, axis, keepdims)
axis: which dimension to sum over; the first dimension is 0, the second is 1, and so on (axis=0 sums down each column, axis=1 sums across each row).
keepdims: keep the reduced dimension as size 1, so the result does not silently lose a dimension.
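A quick illustration with made-up values:

A = np.array([[1, 2, 3],
              [4, 5, 6]])                   # shape (2, 3)
print(np.sum(A, axis=0))                    # [5 7 9], shape (3,): sums down each column
print(np.sum(A, axis=1, keepdims=True))     # [[ 6] [15]], shape (2, 1): row sums, dimension kept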

Mistakes Notebook

$a^{[n](m)}_q$ denotes the activation of the q-th neuron in the n-th layer for the m-th training example (square brackets index the layer, parentheses index the training example).
X is a matrix in which each column is one training example.
On the dimensions of W and b: in this quiz network (2 inputs, 4 hidden units, 1 output), $W^{[1]}$ has shape (4, 2), $W^{[2]}$ has shape (1, 4), $b^{[1]}$ has shape (4, 1), and $b^{[2]}$ has shape (1, 1).
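A quick way to remember this: for any layer l, $W^{[l]}$ has shape $(n^{[l]}, n^{[l-1]})$ and $b^{[l]}$ has shape $(n^{[l]}, 1)$. A small check for the quiz network above (layer sizes assumed to be 2, 4, 1):

layer_dims = [2, 4, 1]   # n_x, n_h, n_y
for l in range(1, len(layer_dims)):
    print(f"W{l}: {(layer_dims[l], layer_dims[l-1])}, b{l}: {(layer_dims[l], 1)}")
# W1: (4, 2), b1: (4, 1)
# W2: (1, 4), b2: (1, 1)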

Python Implementation
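All of the snippets below assume NumPy (and matplotlib for the final plot) are available, and they call a sigmoid helper that the original assignment supplies from a utility file; a minimal version is sketched here:

import numpy as np
import matplotlib.pyplot as plt

def sigmoid(z):
    # Logistic function, used as the output-layer activation
    return 1 / (1 + np.exp(-z))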

1. Define the network structure

def layer_sizes(X, Y):
    """
    Arguments:
    X -- input dataset of shape (input size, number of examples)
    Y -- labels of shape (output size, number of examples)

    Returns:
    n_x -- the size of the input layer
    n_h -- the size of the hidden layer (set this to 4)
    n_y -- the size of the output layer
    """
    n_x = X.shape[0]   # number of input features
    n_h = 4            # hidden layer fixed at 4 units
    n_y = Y.shape[0]   # number of output units
    return (n_x, n_h, n_y)
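For example, on a hypothetical dataset of 400 examples with 2 input features and 1 output label:

X = np.random.randn(2, 400)   # (input size, number of examples)
Y = np.random.randn(1, 400)   # (output size, number of examples)
print(layer_sizes(X, Y))      # (2, 4, 1)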

2. Initialize the model parameters

def initialize_parameters(n_x, n_h, n_y):
    """
    Arguments:
    n_x -- the size of the input layer
    n_h -- the size of the hidden layer (set this to 4)
    n_y -- the size of the output layer

    Returns:
    params -- python dictionary containing your parameters:
                  W1 -- weight matrix of shape (n_h, n_x)
                  b1 -- bias vector of shape (n_h, 1)
                  W2 -- weight matrix of shape (n_y, n_h)
                  b2 -- bias vector of shape (n_y, 1)
    """
    # Small random weights break symmetry; biases can start at zero
    W1 = np.random.randn(n_h, n_x) * 0.01
    b1 = np.zeros((n_h, 1))
    W2 = np.random.randn(n_y, n_h) * 0.01
    b2 = np.zeros((n_y, 1))

    # Sanity-check the shapes
    assert (W1.shape == (n_h, n_x))
    assert (b1.shape == (n_h, 1))
    assert (W2.shape == (n_y, n_h))
    assert (b2.shape == (n_y, 1))

    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2}

    return parameters

3. Forward propagation

def forward_propagation(X, parameters):
    """
    Arguments:
    X -- input data of size (n_x, m)
    parameters -- python dictionary containing your parameters (output of initialize_parameters)

    Returns:
    A2 -- The sigmoid output of the second activation
    cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
    """
    # Retrieve the parameters
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]

    # Hidden layer uses tanh, output layer uses sigmoid
    Z1 = np.dot(W1, X) + b1
    A1 = np.tanh(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = sigmoid(Z2)

    assert (A2.shape == (1, X.shape[1]))
    cache = {"Z1": Z1,
             "A1": A1,
             "Z2": Z2,
             "A2": A2}
    return A2, cache

4. Compute the cost function
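The cost computed below is the cross-entropy averaged over all m training examples (this is exactly what the code implements):

$J = -\dfrac{1}{m} \sum_{i=1}^{m} \left( y^{(i)} \log a^{[2](i)} + (1 - y^{(i)}) \log\left(1 - a^{[2](i)}\right) \right)$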

def compute_cost(A2, Y, parameters):
    """
    Arguments:
    A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
    Y -- "true" labels vector of shape (1, number of examples)
    parameters -- python dictionary containing your parameters W1, b1, W2 and b2
    
    Returns:
    cost -- cross-entropy cost given equation (13)
    """
    
    m = Y.shape[1] # number of examples
    logprobs = np.multiply(np.log(A2),Y) + np.multiply((1 - Y),np.log(1-A2))
    cost = - np.sum(logprobs) / m
    cost = float(np.squeeze(cost))     # makes sure cost is the dimension we expect.                 
    assert(isinstance(cost, float))
    
    return cost

5. Backpropagation (the key and hardest step)
The parameters are optimized with gradient descent.
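The gradients for this network (tanh hidden layer, sigmoid output), matching the code below ($\odot$ denotes elementwise multiplication), are:

$dZ^{[2]} = A^{[2]} - Y$
$dW^{[2]} = \dfrac{1}{m}\, dZ^{[2]} A^{[1]T}$
$db^{[2]} = \dfrac{1}{m} \sum_{i=1}^{m} dZ^{[2](i)}$
$dZ^{[1]} = W^{[2]T} dZ^{[2]} \odot \left(1 - (A^{[1]})^{2}\right)$
$dW^{[1]} = \dfrac{1}{m}\, dZ^{[1]} X^{T}$
$db^{[1]} = \dfrac{1}{m} \sum_{i=1}^{m} dZ^{[1](i)}$

Based on these formulas, define the backpropagation algorithm: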

def backward_propagation(parameters, cache, X, Y):
    """
    Arguments:
    parameters -- python dictionary containing our parameters
    cache -- a dictionary containing "Z1", "A1", "Z2", "A2"
    X -- input data of shape (2, number of examples)
    Y -- "true" labels vector of shape (1, number of examples)

    Returns:
    grads -- python dictionary containing your gradients with respect to different parameters
    """
    m = X.shape[1]

    W1 = parameters["W1"]
    W2 = parameters["W2"]

    A1 = cache["A1"]
    A2 = cache["A2"]

    # Note how the formulas above translate into vectorized NumPy code
    dZ2 = A2 - Y
    dW2 = (1/m) * np.dot(dZ2, A1.T)
    db2 = (1/m) * np.sum(dZ2, axis=1, keepdims=True)
    dZ1 = np.multiply(np.dot(W2.T, dZ2), 1 - np.power(A1, 2))   # tanh'(Z1) = 1 - A1^2
    dW1 = (1/m) * np.dot(dZ1, X.T)
    db1 = (1/m) * np.sum(dZ1, axis=1, keepdims=True)

    grads = {"dW1": dW1,
             "db1": db1,
             "dW2": dW2,
             "db2": db2}

    return grads

6. Update the parameters
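Each parameter is updated with plain gradient descent; this is the rule the function below applies to W1, b1, W2 and b2:

$\theta := \theta - \text{learning\_rate} \cdot d\theta$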

def update_parameters(parameters, grads, learning_rate=1.2):
    """
    Update parameters using the gradient descent update rule given above

    Arguments:
    parameters -- python dictionary containing your parameters
    grads -- python dictionary containing your gradients

    Returns:
    parameters -- python dictionary containing your updated parameters
    """
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]

    dW1 = grads["dW1"]
    db1 = grads["db1"]
    dW2 = grads["dW2"]
    db2 = grads["db2"]

    # One gradient descent step for each parameter
    W1 = W1 - learning_rate * dW1
    b1 = b1 - learning_rate * db1
    W2 = W2 - learning_rate * dW2
    b2 = b2 - learning_rate * db2

    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2}

    return parameters
    

7. Combine the sub-functions into the full model

def nn_model(X, Y, n_h, num_iterations=10000, print_cost=False):
    """
    Arguments:
    X -- dataset of shape (2, number of examples)
    Y -- labels of shape (1, number of examples)
    n_h -- size of the hidden layer
    num_iterations -- Number of iterations in gradient descent loop
    print_cost -- if True, print the cost every 1000 iterations

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
    n_x = layer_sizes(X, Y)[0]
    n_y = layer_sizes(X, Y)[2]

    parameters = initialize_parameters(n_x, n_h, n_y)
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]

    for i in range(0, num_iterations):
        # Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
        A2, cache = forward_propagation(X, parameters)

        # Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
        cost = compute_cost(A2, Y, parameters)

        # Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
        grads = backward_propagation(parameters, cache, X, Y)

        # Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
        parameters = update_parameters(parameters, grads, learning_rate=1.2)

        # Print the cost every 1000 iterations
        if print_cost and i % 1000 == 0:
            print("Cost after iteration %i: %f" % (i, cost))

    return parameters

8. Predict and check the results

def predict(parameters, X):
    """
    Using the learned parameters, predicts a class for each example in X

    Arguments:
    parameters -- python dictionary containing your parameters
    X -- input data of size (n_x, m)

    Returns
    predictions -- vector of predictions of our model (red: 0 / blue: 1)
    """
    # Forward propagate and round the sigmoid output to 0 or 1
    A2, cache = forward_propagation(X, parameters)
    predictions = np.around(A2)

    return predictions

9. Usage

parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)

# Plot the decision boundary (plot_decision_boundary is a helper supplied with the assignment's utility code)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))

predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')

Output:
[Figure: decision boundary for hidden layer size 4]
Accuracy: 90%


Reposted from blog.csdn.net/qq_41380950/article/details/88635555