Deep learning experiment: using NumPy to build a simple feedforward neural network (FNN)

  Notes made while studying deep learning on Tianchi; they implement a very simple FNN and are kept here as a memo.

Logic diagram of the implementation

Experiment code

# Author:JinyuZ1996
# Creation date:2020/8/24 10:28

import numpy as np

# A basic network: 5 input units * 4 hidden neurons * 2 output units

x = np.array([0.5, 0.4, 0.3, 0.4, 0.5])  # The input layer has 5 inputs, shape [1*5] (one sample with 5 features; since the sample is one-dimensional it can be treated as a vector rather than a matrix)
w = np.array([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])  # The first hidden layer has 4 neurons, so the weight matrix should be [5*4]
A = np.dot(x, w)  # dot multiplies the inputs by the weights and sums them, i.e. it computes the net input
print(A)  # print it to check


def sigmoid(z):  # define a simple activation function (what is written here is the Logistic function)
    return 1 / (1 + np.exp(-z))


fade_1 = sigmoid(A)  # apply the activation function to the net input A
print(fade_1)  # print it to check: these are the hidden-layer neuron values, shape [1*4]; they serve as the input to the next layer

w2 = np.array([[3, 2], [3, 2], [2, 3], [1, 1]])  # the weight matrix should be [4*2]
Y = np.dot(fade_1, w2)  # compute the net input of the output layer
print(Y)  # print it to check


def softmax(z):  # softmax is commonly used to normalize the outputs of a 2-class problem (handle it case by case depending on the dimensionality of the samples; here the input Y is a 1-D vector); it returns a probability distribution with values in [0, 1]
    if z.ndim == 2:
        z = z.T  # transpose so that each column is one sample
        z = z - np.max(z, axis=0)  # subtract the column-wise maximum to keep the exponentials from overflowing
        y = np.exp(z) / np.sum(np.exp(z), axis=0)
        return y.T
    else:
        z = z - np.max(z)  # same overflow guard for the 1-D case
        y = np.exp(z) / np.sum(np.exp(z))
        return y


print(softmax(Y))    # print the 2-class prediction (a probability for each class)
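
The 2-D branch of softmax above is never exercised by this script, since Y is a one-dimensional vector. As a minimal sketch (X_batch below is a hypothetical batch of three samples invented for illustration; w, w2 and sigmoid are the ones defined above), the same forward pass can be run on a whole batch at once, which is exactly the case that branch handles:

X_batch = np.array([[0.5, 0.4, 0.3, 0.4, 0.5],
                    [0.1, 0.2, 0.3, 0.4, 0.5],
                    [0.9, 0.8, 0.7, 0.6, 0.5]])  # hypothetical batch of 3 samples, shape [3*5]

H = sigmoid(np.dot(X_batch, w))  # hidden-layer activations, shape [3*4]
Y_batch = np.dot(H, w2)          # output-layer net input, shape [3*2]
print(softmax(Y_batch))          # each row is a probability distribution over the 2 classes and sums to 1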

Experimental results
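
The original post showed the console output as a screenshot, which is not reproduced here; running the script as written should print approximately the following values:

A        ≈ [2.1 4.2 6.3 8.4]
fade_1   ≈ [0.8909 0.9852 0.9982 0.9998]
Y        ≈ [8.6245 7.7465]
softmax  ≈ [0.7064 0.2936]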

 

Origin blog.csdn.net/qq_39381654/article/details/108196695