Build Your Own Neural Network
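Everything below is done by hand with NumPy: a hidden layer of 4 sigmoid units feeding a single sigmoid output (no bias terms), trained with plain gradient descent.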

import numpy as np

def sigmoid(x):
    return 1.0 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # x is expected to already be a sigmoid output, so this equals sigma(z) * (1 - sigma(z))
    return x * (1.0 - x)
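Note that sigmoid_derivative takes a value that has already been passed through sigmoid; it relies on the identity σ'(z) = σ(z)(1 − σ(z)). A quick sanity check (my addition, not part of the original code):

    a = sigmoid(0.5)              # activation, roughly 0.6225
    print(sigmoid_derivative(a))  # roughly 0.2350, i.e. sigma'(0.5)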

class NeuralNetwork:
    def __init__(self, x, y):
        self.input = x
        # one hidden layer of 4 units; weights drawn uniformly from [0, 1)
        self.weights1 = np.random.rand(self.input.shape[1], 4)
        self.weights2 = np.random.rand(4, 1)
        self.y = y
        self.output = np.zeros(self.y.shape)

    def feedforward(self):
        # input -> hidden -> output, with a sigmoid at every layer (no bias terms)
        self.layer1 = sigmoid(np.dot(self.input, self.weights1))
        self.output = sigmoid(np.dot(self.layer1, self.weights2))

    def backprop(self):
        # application of the chain rule to find the derivative of the loss function with respect to weights2 and weights1
        d_weights2 = np.dot(self.layer1.T, (2*(self.y - self.output) * sigmoid_derivative(self.output)))
        d_weights1 = np.dot(self.input.T,  (np.dot(2*(self.y - self.output) * sigmoid_derivative(self.output), self.weights2.T) * sigmoid_derivative(self.layer1)))

        # update the weights with the derivative (slope) of the loss function
        self.weights1 += d_weights1
        self.weights2 += d_weights2
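A note on the derivation (my reading of the code; the original does not spell it out): the quantity being minimized is the sum-of-squares loss, Loss = Σ (y − ŷ)². Its gradient with respect to weights2 is layer1ᵀ multiplied by −2(y − ŷ) · σ'(ŷ); the code drops the leading minus sign and then adds d_weights2, which is equivalent to subtracting the true gradient, i.e. ordinary gradient descent with a learning rate of 1. The d_weights1 term pushes the same error signal one layer further back through weights2 and the hidden activations.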

if __name__ == "__main__":
    X = np.array([[0, 0, 1],
                  [0, 1, 1],
                  [1, 0, 1],
                  [1, 1, 1]])
    y = np.array([[0], [1], [1], [0]])
    nn = NeuralNetwork(X, y)

    for i in range(1500):
        nn.feedforward()
        nn.backprop()

    print(nn.output)
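After 1500 iterations the printed outputs should sit close to the targets [0, 1, 1, 0]; the exact values vary from run to run because of the random weight initialization. To watch training converge, one could also log the loss every few hundred steps; a minimal sketch, assuming the NeuralNetwork class above:

    nn = NeuralNetwork(X, y)
    for i in range(1500):
        nn.feedforward()
        nn.backprop()
        if i % 300 == 0:
            # sum-of-squares loss that backprop implicitly minimizes
            print(i, np.sum((y - nn.output) ** 2))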


Reposted from: www.cnblogs.com/gshang/p/10988735.html