Recognizing "Meow" Pictures with a Deep Neural Network

This article is based on the programming assignment from Week 4 of Course 1 of Andrew Ng's Deep Learning specialization; the goal is to recognize cat pictures with a deep neural network. The implementation proceeds in four steps: model construction, model testing, result analysis, and model application, detailed below. The helper programs used in this article are available here

I. Model Construction

The dataset is the same as in "Recognizing 'Meow' Pictures with Logistic Regression"; see the "Data Preprocessing" section of that article.

train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig[0].shape[0]

train_set_x_flatten = train_set_x_orig.reshape(m_train,-1).T
test_set_x_flatten = test_set_x_orig.reshape(m_test,-1).T

train_set_x = train_set_x_flatten / 255
test_set_x = test_set_x_flatten / 255
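The reshape-and-transpose step can be sanity-checked on a tiny dummy array; the shapes below are made up, only mimicking the (m, px, px, 3) layout of train_set_x_orig:

```python
import numpy as np

# 2 fake "images" of size 4x4 with 3 channels, layout (m, px, px, 3)
m, px = 2, 4
fake_orig = np.arange(m * px * px * 3).reshape(m, px, px, 3)

# flatten each image into one column: result shape is (px*px*3, m)
fake_flat = fake_orig.reshape(m, -1).T
print(fake_flat.shape)  # (48, 2)

# column 0 holds exactly the pixels of image 0, in order
print(np.array_equal(fake_flat[:, 0], fake_orig[0].ravel()))  # True
```

The `.T` is essential: it puts each example in its own column, which is the convention all the layer formulas below assume.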

The third-party libraries used by the program are listed below; lr_utils and dnn_utils_v2 are helper files provided with the course, containing the dataset loader and the sigmoid/relu functions respectively. File link:

import numpy as np
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward

Following the structure of the deep neural network algorithm, the implementation can be split into the seven steps below; after implementing each step, you can verify it against the parameters produced by the course's helper programs.

1. Decide the network depth and the number of units per layer. In the lectures, Andrew Ng explains that the number of units typically decreases layer by layer, resembling a progression from local features to the whole; the number of layers L and the layer sizes n[l] are hyperparameters to be tuned for the task at hand. This article uses the following structure:

layers_dims = [12288, 20, 7, 5, 1]

2. Initialize the parameters. For an L-layer network, the shapes of the initialized parameters can be checked as follows: W[l] has shape (n[l], n[l-1]) and b[l] has shape (n[l], 1), where n[l] is the number of units in layer l.

def initialize_parameters_deep(layer_dims):
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)

    for l in range(1,L):
        parameters['W'+str(l)] = np.random.randn(layer_dims[l],layer_dims[l-1]) / np.sqrt(layer_dims[l-1]) #* 0.01
        parameters['b'+str(l)] = np.zeros((layer_dims[l],1))

        assert(parameters['W'+str(l)].shape == (layer_dims[l],layer_dims[l-1]))
        assert(parameters['b'+str(l)].shape == (layer_dims[l],1))

    return parameters
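As an aside, the division by np.sqrt(layer_dims[l-1]) (rather than the * 0.01 left in the comment) matters for deep networks: scaling by 1/√n keeps the pre-activations Z at roughly unit variance, so the signal neither shrinks nor blows up layer after layer. A small sketch with made-up dimensions:

```python
import numpy as np

np.random.seed(0)
n_prev = 500                              # fan-in of a hypothetical layer
A_prev = np.random.randn(n_prev, 1000)    # 1000 fake unit-variance activations

W_scaled = np.random.randn(10, n_prev) / np.sqrt(n_prev)
W_small = np.random.randn(10, n_prev) * 0.01

print(np.std(W_scaled @ A_prev))   # ≈ 1.0: variance preserved
print(np.std(W_small @ A_prev))    # ≈ 0.22: signal shrinks at every layer
```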

3. Forward propagation. Each layer's forward pass takes two computations: Z = np.dot(W, A_prev) + b, then A = g(Z). A_prev, W, b, and Z are cached and returned so the backward pass can reuse them.

def linear_forward(A,W,b):

    Z = np.dot(W,A) + b

    assert(Z.shape == (W.shape[0],A.shape[1]))
    cache = (A, W, b)

    return Z, cache


def linear_activation_forward(A_prev,W,b,activation):
    if activation == "sigmoid":
        Z, linear_cache = linear_forward(A_prev,W,b)
        A, activation_cache = sigmoid(Z)

    elif activation == "relu":
        Z, linear_cache = linear_forward(A_prev,W,b)
        A, activation_cache = relu(Z)
        
    assert(A.shape == (W.shape[0],A_prev.shape[1]))
    cache = (linear_cache,activation_cache)

    return A,cache

The L-layer forward propagation function:

def L_model_forward(X,parameters):
    caches = []
    A = X
    L = len(parameters) // 2

    for l in range(1,L):
        A_prev =A
        A, cache = linear_activation_forward(A_prev,parameters['W'+str(l)],
                                             parameters['b'+str(l)],"relu")
        
        caches.append(cache)

    AL,cache = linear_activation_forward(A,parameters['W'+str(L)],
                                         parameters['b'+str(L)],"sigmoid")
    caches.append(cache)

    assert(AL.shape == (1,X.shape[1]))

    return AL, caches
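The chain above can be checked end to end on a toy two-layer network. This standalone sketch inlines sigmoid and relu (which the article imports from dnn_utils_v2); all sizes are made up:

```python
import numpy as np

def sigmoid(Z):
    return 1 / (1 + np.exp(-Z))

def relu(Z):
    return np.maximum(0, Z)

np.random.seed(1)
# tiny net: 3 inputs -> 2 hidden units (relu) -> 1 output (sigmoid)
X = np.random.randn(3, 4)                        # 4 examples as columns
W1, b1 = np.random.randn(2, 3), np.zeros((2, 1))
W2, b2 = np.random.randn(1, 2), np.zeros((1, 1))

A1 = relu(W1 @ X + b1)          # hidden layer
AL = sigmoid(W2 @ A1 + b2)      # output layer

print(AL.shape)                            # (1, 4): one probability per example
print(bool(np.all((AL > 0) & (AL < 1))))   # True
```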

4. Compute the cost. The cross-entropy cost is:

cost = -(1/m) * Σ_i [ y(i)·log(AL(i)) + (1 - y(i))·log(1 - AL(i)) ]

def compute_cost(AL,Y):

    m = Y.shape[1]
    logprob = np.multiply(np.log(AL),Y) + np.multiply(np.log(1-AL),1-Y)
    cost = -np.sum(logprob) / m
    cost = np.squeeze(cost)
    assert(cost.shape == ())

    return cost
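The formula can be verified by hand on a toy batch; the probabilities and labels below are made up:

```python
import numpy as np

AL = np.array([[0.8, 0.9, 0.4]])   # predicted probabilities
Y = np.array([[1, 1, 0]])          # true labels
m = Y.shape[1]

cost = -np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL)) / m
# by hand: -(log 0.8 + log 0.9 + log 0.6) / 3
print(round(cost, 4))  # 0.2798
```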

5. Backward propagation. This step is the heart of the algorithm. It too has two parts: first dZ[l] = dA[l] * g'(Z[l]), then from dZ[l] compute (dW[l], db[l], dA[l-1]) as follows:

dW[l] = (1/m) · dZ[l] · A[l-1]^T
db[l] = (1/m) · Σ dZ[l]   (summed across the examples)
dA[l-1] = W[l]^T · dZ[l]

def linear_backward(dZ, cache):

    A_prev, W, b = cache
    m = A_prev.shape[1]

    dW = np.dot(dZ, A_prev.T) / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = np.dot(W.T, dZ)

    assert(dW.shape == W.shape)
    assert(db.shape == b.shape)
    assert(dA_prev.shape == A_prev.shape)

    return dA_prev, dW, db

def linear_activation_backward(dA, cache, activation):

    linear_cache, activation_cache = cache

    if activation == "relu":

        dZ = relu_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)

    elif activation == "sigmoid":
        dZ = sigmoid_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)

    return dA_prev, dW, db
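Backward-pass code is easy to get subtly wrong, so a numerical gradient check is a cheap safeguard. The sketch below verifies, for a single sigmoid unit, the analytic rule dZ = dA * s * (1 - s) that sigmoid_backward implements, against a central difference:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

z = 0.7
dA = 1.0                     # pretend the upstream gradient is 1

# analytic gradient: dZ = dA * s * (1 - s)
s = sigmoid(z)
dZ_analytic = dA * s * (1 - s)

# numerical gradient: central difference
eps = 1e-6
dZ_numeric = dA * (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)

print(abs(dZ_analytic - dZ_numeric) < 1e-7)  # True
```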

The L-layer backward propagation function:

def L_model_backward(AL, Y, caches):
    grads = {}
    L = len(caches)
    m = AL.shape[1]
    Y = Y.reshape(AL.shape)

    dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

    current_cache = caches[L-1]
    grads['dA' + str(L-1)], grads['dW' + str(L)], grads['db' + str(L)] = linear_activation_backward(dAL, current_cache, "sigmoid")

    for l in reversed(range(L-1)):
        current_cache = caches[l]
        dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads['dA' + str(l+1)], current_cache, "relu")
        grads['dA' + str(l)] = dA_prev_temp
        grads['dW' + str(l+1)] = dW_temp
        grads['db' + str(l+1)] = db_temp

    return grads

Note: the course's reference solution has a small indexing slip in L_model_backward(): the value it stores under grads['dA' + str(L)] is actually dA[L-1]. The code above uses the corrected index, grads['dA' + str(L-1)].

6. Update the parameters.

def update_parameters(parameters, grads, learning_rate):

    L = len(parameters) // 2

    for l in range(L):
        parameters["W" + str(l+1)] -= learning_rate * grads["dW" + str(l+1)]
        parameters["b" + str(l+1)] -= learning_rate * grads["db" + str(l+1)]

    return parameters
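A toy run of the update rule theta := theta - learning_rate * dtheta on a one-layer parameter set (all numbers made up):

```python
import numpy as np

parameters = {"W1": np.array([[1.0, 2.0]]), "b1": np.array([[0.5]])}
grads = {"dW1": np.array([[0.1, -0.2]]), "db1": np.array([[0.3]])}
learning_rate = 0.1

for l in range(1):
    parameters["W" + str(l+1)] -= learning_rate * grads["dW" + str(l+1)]
    parameters["b" + str(l+1)] -= learning_rate * grads["db" + str(l+1)]

print(parameters["W1"])   # ≈ [[0.99, 2.02]]
print(parameters["b1"])   # ≈ [[0.47]]
```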

7. Assemble the network model.

Combining the functions above and iterating steps 3-6 repeatedly yields the final parameters of the neural network:


def L_layer_model(X, Y, layers_dims, learning_rate=0.0075, num_iterations=3000, print_cost=False):

    np.random.seed(1)
    costs = []

    parameters = initialize_parameters_deep(layers_dims)

    for i in range(num_iterations):
        AL, caches = L_model_forward(X, parameters)

        cost = compute_cost(AL, Y)

        grads = L_model_backward(AL, Y, caches)

        parameters = update_parameters(parameters, grads, learning_rate)

        # record and report the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print("Cost after iteration %i: %f" % (i, cost))
            costs.append(cost)

    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per hundreds)')
    plt.title('Learning rate = ' + str(learning_rate))
    plt.show()

    return parameters

II. Model Testing

parameters = L_layer_model(train_set_x, train_set_y, layers_dims, learning_rate = 0.0075, num_iterations = 2000, print_cost =True)
Cost after iteration 0: 0.715732
Cost after iteration 100: 0.674738
Cost after iteration 200: 0.660337
Cost after iteration 300: 0.646289
Cost after iteration 400: 0.629813
Cost after iteration 500: 0.606006
Cost after iteration 600: 0.569004
Cost after iteration 700: 0.519797
Cost after iteration 800: 0.464157
Cost after iteration 900: 0.408420
Cost after iteration 1000: 0.373155
Cost after iteration 1100: 0.305724
Cost after iteration 1200: 0.268102
Cost after iteration 1300: 0.238725
Cost after iteration 1400: 0.206323
Cost after iteration 1500: 0.179439
Cost after iteration 1600: 0.157987
Cost after iteration 1700: 0.142404
Cost after iteration 1800: 0.128652
Cost after iteration 1900: 0.112443
Cost after iteration 2000: 0.085056


After 2000 iterations we obtain a set of parameters with a cost of 0.085056. Using these as the network's parameters, we predict on the test set with the following function:

def predict(X, y, parameters):
    m = X.shape[1]
    n = len(parameters) // 2
    p = np.zeros((1,m))

    probas, caches = L_model_forward(X,parameters)
    for i in range(0,probas.shape[1]):
        if probas[0,i] > 0.5:
            p[0,i] = 1
        else:
            p[0,i] = 0
    print("Accuracy: "  + str(np.sum((p == y)/m)))
    return p
train set Accuracy: 0.9952153110047844
test set Accuracy: 0.78
The prediction accuracy reaches 99.5% on the training set but only 78% on the test set, so the model overfits somewhat.
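As a side note, the element-wise loop in predict can be collapsed into a single vectorized comparison; a sketch with made-up probabilities:

```python
import numpy as np

probas = np.array([[0.2, 0.7, 0.51, 0.5]])
p = (probas > 0.5).astype(float)   # threshold at 0.5 in one shot
print(p)  # [[0. 1. 1. 0.]]
```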

III. Result Analysis

"Recognizing 'Meow' Pictures with Logistic Regression" made predictions on the same data; the deep neural network is more accurate, but because it overfits, test-set performance still needs further improvement through hyperparameter tuning and related methods.

Examining the features of the test images that were misclassified can suggest ways to improve the algorithm.

def print_mislabeled_image(classes, X, y, p):
    a = p + y
    # p + y equals 1 exactly where prediction and label disagree
    mislabeled_indices = np.asarray(np.where(a == 1))
    plt.rcParams['figure.figsize'] = (40.0, 40.0)
    num_images = len(mislabeled_indices[0])
    for i in range(num_images):
        index = mislabeled_indices[1][i]

        plt.subplot(2, num_images, i + 1)
        plt.imshow(X[:, index].reshape(64, 64, 3), interpolation='nearest')
        plt.axis('off')
        plt.title("Prediction: " + classes[int(p[0, index])].decode("utf-8") +
                  " \n Class: " + classes[y[0, index]].decode("utf-8"))
    plt.show()

Only four representative images are discussed here:

1) The image is too bright and the background contrast too strong

2) The cat's color is too close to the background

3) The shooting angle is distorted

4) The shot captures only part of the cat

IV. Model Application

Finally, you can find a picture of your own and test the model we built. Save the image in the images folder (this article was written in IDLE), then preprocess it into the required format.


my_image = "cat_image.jpg"
y = [1]
fname = "images\\" + my_image

# ndimage.imread and scipy.misc.imresize have been removed from recent SciPy;
# PIL (already imported) handles both loading and resizing
image = np.array(Image.open(fname))
my_image = np.array(Image.fromarray(image).resize((num_px, num_px))).reshape((1, num_px * num_px * 3)).T
my_image = my_image / 255   # normalize exactly as the training data was

Run the prediction:

my_predicted_image = predict(my_image, y, parameters)

plt.imshow(image)
plt.show()
print('y='+str(np.squeeze(my_predicted_image))+",you predict that it is a \""+
        classes[int(np.squeeze(my_predicted_image)),].decode('utf-8')+"\"picture.")
The printed result:
y=1.0,you predict that it is a "cat"picture.



Reposted from blog.csdn.net/u013093426/article/details/80885730