Neural network - Python implementation of the Hopfield neural network algorithm (theory + example + program)

1. Hopfield neural network

The Hopfield neural network is a recurrent neural network invented by John Hopfield in 1982. It combines an associative memory system with binary threshold units. The network is guaranteed to converge to a local minimum of its energy function, but it may converge to a wrong local minimum (a spurious state) rather than to the global minimum. Hopfield networks also provide a model for simulating human memory. [1]

 

Update the value of a node in a Hopfield network using the following rule:

    s_i = +1  if  Σ_j w_ji · s_j > θ_i
    s_i = −1  otherwise

where:

w_ji is the weight from node j to node i.
s_i is the value (state) of node i.
θ_i is the threshold of node i, usually 0.
A Hopfield network can be updated in two ways:

Asynchronous: one node is updated at a time. The node can be chosen at random or in a preset order.
Synchronous: all nodes are updated at the same time. This requires a central clock to keep the system synchronized, and is considered impractical as a biological model, since in biological or physical systems there is usually no central clock and each node acts on its own.
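The asynchronous rule above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the original post; the helper name `async_update` and the 4-bit example pattern are my own:

```python
import numpy as np

def async_update(W, s, theta=0.0, rng=None):
    """One asynchronous sweep: nodes are updated one at a time in random order."""
    rng = rng or np.random.default_rng(0)
    s = s.copy()
    for i in rng.permutation(len(s)):
        # s_i = +1 if sum_j w_ji * s_j > theta_i, else -1
        s[i] = 1 if W[i] @ s > theta else -1
    return s

# Tiny example: Hebbian weights that store the pattern p = (1, -1, 1, -1)
p = np.array([1, -1, 1, -1])
W = np.outer(p, p).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

# A probe with the last bit flipped converges back to p in one sweep
print(async_update(W, np.array([1, -1, 1, 1])))  # → [ 1 -1  1 -1]
```

For this pattern and probe, every node other than the flipped one already satisfies the rule, so the result is the same whatever update order the permutation happens to pick.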

The basic characteristics of a Hopfield network:
1. It has only a single layer.
2. The neuron nodes are fully connected.
3. There is no separate output layer; the node states themselves serve as both input and output.

2. Python implementation of the Hopfield neural network algorithm

1. For the two memory patterns (1, 1, 1, 1) and (-1, -1, -1, -1) (an example of non-orthogonal memory pattern vectors), design the network weights.
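Before looking at the program, it helps to write out the Hebbian design rule the code implements (a standard formulation; the subtracted diagonal matches the code below):

```latex
W = \sum_{p=1}^{P} x^{p} \left(x^{p}\right)^{\top} - P\,I
```

For both (1, 1, 1, 1) and (−1, −1, −1, −1), the outer product x x^T is the all-ones 4×4 matrix J, so W = 2J − 2I: every off-diagonal weight is 2 and the diagonal is 0.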

The result of running the program is as follows:

 

import numpy as np

# The first four and last four entries are pseudo-attractor padding so that
# P < 0.25n holds; with the padding the two patterns are orthogonal overall.
X = np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, -1, 1, 1],
              [1, 1, 1, 1, -1, -1, -1, -1, -1, 1, 1, 1]])

def makematrix(m, n, fill=0.0):
    # m x n matrix filled with `fill`
    return np.full((m, n), fill)

class Hopfield:
    def __init__(self, num_in):
        self.num_in = num_in
        self.weight = makematrix(num_in, num_in)

    def determine_weight(self, inputs):
        # Hebbian rule: W = sum_p x_p x_p^T, then zero the diagonal
        for x in inputs:
            self.weight += np.dot(x.reshape(-1, 1), x.reshape(1, -1))
        # every diagonal entry equals the number of stored patterns,
        # so subtracting weight[2][2] * I zeroes the diagonal
        return self.weight - self.weight[2][2] * np.identity(self.num_in)

h = Hopfield(12)
print("Network weights:\n", h.determine_weight(X))

2. Now take (1, 1, 1, 1) and (-1, -1, -1, -1) in turn as the input of the network, and judge whether each is a stable state of the network. Then take noisy probes such as (1, 1, -1, 1) as the input, and compute the state the network finally converges to.
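A pattern s is a stable state exactly when one synchronous update leaves it unchanged, i.e. sgn(W·s − θ) = s. A minimal check along those lines (the helper name `is_stable` is my own, not from the original post):

```python
import numpy as np

def is_stable(W, s, theta=0.0):
    """True if pattern s is a fixed point: one update step leaves it unchanged."""
    net = W @ s - theta
    return np.array_equal(np.where(net > 0, 1, -1), s)

# Weights storing (1, 1, 1, 1) and (-1, -1, -1, -1), diagonal zeroed
patterns = np.array([[1, 1, 1, 1], [-1, -1, -1, -1]])
W = sum(np.outer(x, x) for x in patterns).astype(float)
np.fill_diagonal(W, 0)

print(is_stable(W, patterns[0]))              # True: a stored pattern is stable
print(is_stable(W, np.array([1, 1, -1, 1])))  # False: a noisy probe is not
```

The noisy probe fails the check at its flipped bit, so the network keeps updating it until it settles into one of the two stored attractors.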

The result of running the program is as follows:

 

import numpy as np

# The first four and last four entries are pseudo-attractor padding so that
# P < 0.25n holds; with the padding the two patterns are orthogonal overall.
X = np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, -1, 1, 1],
              [1, 1, 1, 1, -1, -1, -1, -1, -1, 1, 1, 1]])

def makematrix(m, n, fill=0.0):
    return np.full((m, n), fill)

def sgn(v):
    # sign function mapping to the bipolar states {-1, +1}
    return 1 if v > 0 else -1

class Hopfield:
    def __init__(self, num_in, num_p):  # input dimension, number of stored patterns
        self.num_in = num_in
        self.num_p = num_p
        self.weight = makematrix(num_in, num_in)
        self.t = np.zeros((self.num_in, 1))  # thresholds θ, usually 0

    def determine_weight(self, inputs):
        # Hebbian rule: W = sum_p x_p x_p^T, then zero the diagonal
        for x in inputs:
            self.weight += np.dot(x.reshape(-1, 1), x.reshape(1, -1))
        self.weight = self.weight - self.weight[2][2] * np.identity(self.num_in)
        return self.weight

    def fun(self, inputs):
        # one synchronous update: vectorized form of sgn(W @ s - θ)
        net = np.dot(self.weight, inputs.T) - self.t
        return np.where(net.T > 0, 1, -1)

    def train(self, inputs, iters=100):
        temp = inputs
        for i in range(iters):
            temp = self.fun(temp)
        return temp

    def test(self, t):
        return self.fun(t)

t = np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, -1, 1, 1],
              [1, 1, 1, 1, -1, -1, -1, -1, 1, -1, 1, 1]])
# The pseudo-attractor padding is not randomly generated; the middle four
# entries are the noisy probes [1,1,-1,1], [1,-1,-1,1], [-1,1,1,1], [1,1,1,-1], ...
t1 = np.array([[1, 1, 1, 1,
                1, 1, -1, 1,
                1, -1, 1, 1],
               [1, 1, 1, 1,
                -1, 1, 1, 1,
                1, -1, 1, 1],
               [1, 1, 1, 1,
                1, -1, -1, 1,
                1, -1, 1, 1]])
h = Hopfield(12, 2)
print("Network weights:\n", h.determine_weight(X))
print("Training:\n", h.train(X)[:, 4:8])
a1 = h.train(t)
print("Outputs with (1, 1, 1, 1) and (-1, -1, -1, -1)\nas network inputs:\n", a1[:, 4:8])
a2 = h.train(t1)
print("Test 2:\n", a2[:, 4:8])

Source: Xiao Ling の Blog—Good Times|A bad blog

[1] Wikipedia

* The code was written rather hastily; please forgive any problems with it!

Origin blog.csdn.net/Linyun2tt/article/details/129900495