Python Neural Network Basics

   These notes come from the book 《Python神经网络编程》, building a Python neural network step by step, from simple to complete.

  The rough framework first:

            1. Initialization -- configure the number of input, hidden, and output nodes.

            2. Training -- learn from the data and adjust the weights.

            3. Query -- feed in data and return the result.

The skeleton code is as follows:

class NeuralNetwork:
    def __init__(self):
        pass
    def train(self):
        pass
    def query(self):
        pass

1. Initialization:

  The numbers of input, hidden, and output nodes determine the shape of the network, so the initializer takes them (plus the learning rate) as parameters:

    def __init__(self,inputnodes,hidnodes,outnodes,lrate):
        self.inputnodes = inputnodes
        self.hidnodes = hidnodes
        self.outnodes = outnodes
        self.lr = lrate

2. Weights:

  Initialize the weights randomly, sampling from a normal distribution centered at 0.0 with standard deviation pow(nodes, -0.5), i.e. 1/√nodes, so larger layers start with smaller weights:

        self.wih = np.random.normal(0.0,pow(self.hidnodes,-0.5),
                                    (self.hidnodes,self.inputnodes))
        self.who = np.random.normal(0.0,pow(self.outnodes,-0.5),
                                    (self.outnodes,self.hidnodes))

3. Query (prediction):

  Computing the weighted sum of the input signals is much simpler as a matrix multiplication:

h_input = np.dot(self.wih,inputs)
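A minimal sketch of what this one line does, with made-up numbers: wih has shape (hidden, input), so multiplying it by a column vector of inputs gives every hidden node's weighted sum at once.

import numpy as np

# Illustrative numbers only: 3 hidden nodes, 3 input nodes
wih = np.array([[0.9, 0.3, 0.4],
                [0.2, 0.8, 0.2],
                [0.1, 0.5, 0.6]])
inputs = np.array([[0.9], [0.1], [0.8]])   # column vector, shape (3, 1)

h_input = np.dot(wih, inputs)
print(h_input)    # row i is sum_j wih[i, j] * inputs[j]
# [[1.16]
#  [0.42]
#  [0.62]]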

Then pass the result through the activation function: output = sigmoid(x).


Initializing the activation function in __init__:

 self.activation = lambda x:scipy.special.expit(x)
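scipy.special.expit is the logistic sigmoid, 1 / (1 + e^-x); a quick illustrative check:

import numpy as np
from scipy.special import expit

x = np.array([-2.0, 0.0, 2.0])
print(expit(x))                    # [0.11920292 0.5        0.88079708]
print(1.0 / (1.0 + np.exp(-x)))    # same values, computed directly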

The forward-propagation (query) code:

    def query(self,input_list):
        inputs = np.array(input_list,ndmin=2).T
        h_input = np.dot(self.wih,inputs)
        h_output = self.activation(h_input)
        f_input = np.dot(self.who,h_output)
        f_output = self.activation(f_input)
        return f_output

A quick test:

n = NeuralNetwork(3,3,3,0.3)
b = n.query([1.0,0.5,-1.5])
print(b)

The weights are still random at this point, so this only checks that the forward pass runs correctly. Example output:

[[0.63677225]
 [0.80971638]
 [0.56987426]]

Because the weights are random, your results will differ.

The code so far:

import numpy as np
import scipy.special

class NeuralNetwork:
    def __init__(self,inputnodes,hidnodes,outnodes,lrate):
        self.inputnodes = inputnodes
        self.hidnodes = hidnodes
        self.outnodes = outnodes
        self.lr = lrate
        self.wih = np.random.normal(0.0,pow(self.hidnodes,-0.5),
                                    (self.hidnodes,self.inputnodes))
        self.who = np.random.normal(0.0,pow(self.outnodes,-0.5),
                                    (self.outnodes,self.hidnodes))
        self.activation = lambda x:scipy.special.expit(x)

    def train(self):
        pass
    def query(self,input_list):
        inputs = np.array(input_list,ndmin=2).T
        h_input = np.dot(self.wih,inputs)
        h_output = self.activation(h_input)
        f_input = np.dot(self.who,h_output)
        f_output = self.activation(f_input)
        return f_output

n = NeuralNetwork(3,3,3,0.3)
b = n.query([1.0,0.5,-1.5])
print(b)

Writing the training function:

  It has two tasks: 1. a forward pass that computes the output, exactly as query() does; 2. a backward pass that adjusts the weights according to the error.

  1. Forward pass: the same as query(), with an extra labels argument:

    def train(self,input_list,labels_list):
        inputs = np.array(input_list,ndmin=2).T
        labels = np.array(labels_list,ndmin=2).T
        h_input = np.dot(self.wih, inputs)
        h_output = self.activation(h_input)
        f_input = np.dot(self.who, h_output)
        f_output = self.activation(f_input)

  2. Backward pass. The output-layer error is the difference between the labels and the network's output:

o_error = labels - f_output

The hidden-layer error is the output error propagated back through the hidden-to-output weights, so each hidden node receives a share of the error in proportion to its connecting weights:

h_error = np.dot(self.who.T,o_error)
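A small sketch with made-up numbers: transposing who sends each output node's error back to the hidden nodes, weighted by the same connections used in the forward pass.

import numpy as np

who = np.array([[2.0, 1.0],      # illustrative 2x2 hidden-to-output weights
                [3.0, 4.0]])
o_error = np.array([[0.8],       # errors at the two output nodes
                    [0.5]])

h_error = np.dot(who.T, o_error)
print(h_error)    # [[3.1]   hidden node 0: 2.0*0.8 + 3.0*0.5
                  #  [2.8]]  hidden node 1: 1.0*0.8 + 4.0*0.5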

Weight updates by gradient descent, scaled by the learning rate:

        self.who += self.lr*np.dot((o_error*f_output*(1.0-f_output)),np.transpose(h_output))
        self.wih += self.lr*np.dot((h_error*h_output*(1.0-h_output)),np.transpose(inputs))
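The f_output*(1.0-f_output) factor is the derivative of the sigmoid, since d sigmoid(x)/dx = sigmoid(x)*(1 - sigmoid(x)). An illustrative numerical check of that identity:

import numpy as np
from scipy.special import expit

x = 0.7                                                  # arbitrary point
y = expit(x)
analytic = y * (1.0 - y)                                 # sigmoid(x) * (1 - sigmoid(x))
numeric = (expit(x + 1e-6) - expit(x - 1e-6)) / 2e-6     # central difference
print(analytic, numeric)                                 # both roughly 0.2217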

The complete code:

import numpy as np
import scipy.special

class NeuralNetwork:
    def __init__(self,inputnodes,hidnodes,outnodes,lrate):
        self.inputnodes = inputnodes
        self.hidnodes = hidnodes
        self.outnodes = outnodes
        self.lr = lrate
        self.wih = np.random.normal(0.0,pow(self.hidnodes,-0.5),
                                    (self.hidnodes,self.inputnodes))
        self.who = np.random.normal(0.0,pow(self.outnodes,-0.5),
                                    (self.outnodes,self.hidnodes))
        self.activation = lambda x:scipy.special.expit(x)

    def train(self,input_list,labels_list):
        inputs = np.array(input_list,ndmin=2).T
        labels = np.array(labels_list,ndmin=2).T
        h_input = np.dot(self.wih, inputs)
        h_output = self.activation(h_input)
        f_input = np.dot(self.who, h_output)
        f_output = self.activation(f_input)

        o_error = labels - f_output
        h_error = np.dot(self.who.T,o_error)
        self.who += self.lr*np.dot((o_error*f_output*(1.0-f_output)),np.transpose(h_output))
        self.wih += self.lr*np.dot((h_error*h_output*(1.0-h_output)),np.transpose(inputs))
    def query(self,input_list):
        inputs = np.array(input_list,ndmin=2).T
        h_input = np.dot(self.wih,inputs)
        h_output = self.activation(h_input)
        f_input = np.dot(self.who,h_output)
        f_output = self.activation(f_input)
        return f_output

# n = NeuralNetwork(3,3,3,0.3)
# b = n.query([1.0,0.5,-1.5])
# print(b)
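As a usage sketch (the data and iteration count below are made up purely for illustration), the class above can be exercised like this; the targets stay away from 0 and 1 because the sigmoid can never actually reach them:

# Minimal usage sketch with made-up data: 3 inputs, 3 hidden nodes, 3 outputs.
n = NeuralNetwork(3, 3, 3, 0.3)

samples = [([0.9, 0.1, 0.1], [0.99, 0.01, 0.01]),
           ([0.1, 0.9, 0.1], [0.01, 0.99, 0.01]),
           ([0.1, 0.1, 0.9], [0.01, 0.01, 0.99])]

for _ in range(1000):                 # repeat the tiny data set many times
    for inputs, labels in samples:
        n.train(inputs, labels)

print(n.query([0.9, 0.1, 0.1]))       # the first output should usually be the largest now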


Reposted from www.cnblogs.com/Time-machine/p/9256083.html