Design of a Spiking Neural Network (SNN) with the Izhikevich Model

Table of contents

1. Theoretical basis

2. Core program

3. Simulation conclusion


1. Theoretical basis

 

      Like traditional artificial neural networks, spiking neural networks fall into three topological structures: feed-forward spiking neural networks, recurrent spiking neural networks, and hybrid spiking neural networks.

       Learning is a core issue in the field of artificial intelligence. For SNNs, studying learning methods that operate at the level of spike timing is necessary to verify the information-processing and learning mechanisms of the biological nervous system through theoretical models. By building artificial nervous systems in a biologically plausible way, researchers hope to connect these models with neuroscience and behavioral experiments. Learning in the brain can be understood as changes in the strength of synaptic connections over time, an ability known as synaptic plasticity. The learning methods of spiking neural networks mainly include unsupervised learning, supervised learning, and reinforcement learning.

1. Unsupervised learning algorithm

        Unsupervised learning dominates the learning of humans and animals: people discover the inner structure of the world through observation, rather than being told the name of every object. Unsupervised learning algorithms for artificial neural networks are designed mainly for training on unlabeled data sets, and they apply unsupervised learning rules to adaptively adjust the connection weights or structure of the network. That is, without a supervising "teacher" signal, the network must discover regularities in the input data by itself (such as statistical characteristics, correlations, or categories) and achieve classification or decision-making through its output. In general, unsupervised learning is meaningful only when the input data set contains redundancy; otherwise, it cannot discover any patterns or features in the input data. In this sense, the redundancy provides the knowledge.

       Most unsupervised learning algorithms for spiking neural networks are based on those of traditional artificial neural networks and build on different variants of Hebb's learning rule. Research in neuroscience has shown that spike sequences in the biological nervous system not only cause persistent changes in synapses, but also follow the spike-timing-dependent plasticity (STDP) mechanism. Within a critical time window, synaptic weights can be adjusted in an unsupervised manner by applying the STDP learning rule according to the relative timing of the spikes fired by the pre-synaptic and post-synaptic neurons.
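As a concrete sketch of the pair-based STDP rule described above (not from the original post; the exponential window shape is standard, but the amplitude and time-constant values below are illustrative assumptions):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for one pre/post spike pair.

    dt = t_post - t_pre in ms.
    Positive dt (pre fires before post) -> potentiation (LTP);
    negative dt (post fires before pre) -> depression (LTD).
    The change decays exponentially with the pairing interval.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

# Pre fires 5 ms before post: the synapse is strengthened.
ltp = stdp_dw(5.0)
# Post fires 5 ms before pre: the synapse is weakened.
ltd = stdp_dw(-5.0)
```

Note how the magnitude of the change shrinks as the two spikes move apart in time, which is exactly the "critical time window" mentioned above.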

2. Supervised Learning of Spiking Neural Networks

        Supervised learning of a spiking neural network means finding a synaptic weight matrix such that, for given input spike trains and target spike trains, the output spike trains of the neurons are as close as possible to the corresponding target spike trains, i.e. the error function between the two is minimized. In spiking neural networks, neural information is expressed as spike trains, and the internal state variables and error functions of the neurons are no longer continuous and differentiable. Constructing an effective supervised learning algorithm for spiking neural networks is therefore very difficult, and it remains a challenging and important research direction in this field.

       According to the different basic ideas used in supervised learning, existing supervised learning algorithms can be divided into three categories:

       The basic idea of supervised learning based on gradient descent is to compute the error between the target output and the actual output of the neurons, back-propagate this error, and use the resulting gradient as a reference for adjusting the synaptic weights so that the error is gradually reduced. Gradient-based supervised learning is a mathematical-analysis method: when deriving the learning rules, the state variables of the neuron model must have analytical expressions, so linear neuron models with a fixed threshold are mainly used, such as the spike response model (SRM) and the integrate-and-fire neuron model.

       The basic idea of supervised learning based on synaptic plasticity is to use the plasticity mechanisms triggered by the temporal correlation of neuronal spike trains to design rules for adjusting the synaptic weights; this yields biologically interpretable supervised learning.

       Supervised learning based on spike-train convolution constructs the learning algorithm from differences between inner products of spike trains. The adjustment of the synaptic weights depends on a convolution with a specific kernel function, which enables learning of the spatiotemporal patterns of spike trains.
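The inner-product idea in the last category can be sketched with a causal exponential kernel (a minimal illustration in the spirit of van Rossum-style spike-train metrics; the kernel choice and time constant are assumptions, and for this kernel the inner product has a closed form):

```python
import math

def kernel_inner_product(train_a, train_b, tau=10.0):
    """Inner product of two spike trains after convolving each spike with a
    causal exponential kernel exp(-t/tau), t >= 0.

    For this kernel the integral has a closed form:
    <f_a, f_b> = sum over all spike pairs of (tau/2) * exp(-|t_i - t_j| / tau).
    """
    return sum((tau / 2.0) * math.exp(-abs(ti - tj) / tau)
               for ti in train_a for tj in train_b)

def spike_train_distance(train_a, train_b, tau=10.0):
    """Squared distance between the filtered trains, expanded into inner
    products: ||f_a - f_b||^2 = <f_a,f_a> - 2<f_a,f_b> + <f_b,f_b>."""
    return (kernel_inner_product(train_a, train_a, tau)
            - 2.0 * kernel_inner_product(train_a, train_b, tau)
            + kernel_inner_product(train_b, train_b, tau))

# Identical trains have zero distance; a shifted train has a positive one.
d0 = spike_train_distance([10.0, 30.0], [10.0, 30.0])
d1 = spike_train_distance([10.0, 30.0], [12.0, 33.0])
```

A learning rule of this family would move the output spike train so that its distance to the target spike train decreases.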
3. Reinforcement Learning with Spiking Neural Networks

       Reinforcement learning is the learning of a mapping from environment states to actions, such that the cumulative reward the agent obtains from the environment is maximized. Building on biologically inspired learning mechanisms, the research focus of reinforcement learning in artificial neural networks is the adaptive optimization strategies of agents, and it has been one of the main approaches in neural networks and intelligent control in recent years. Reinforcement learning studies how an agent takes a sequence of actions in an environment; through reinforcement learning, the agent learns which action to take in which state. The difference between reinforcement learning and supervised learning mainly lies in the following two points:

       First, reinforcement learning is trial-and-error learning: since there is no direct "teacher" signal, the agent must continually interact with the environment and obtain the best strategy through trial and error. Second, rewards are delayed: the guidance information in reinforcement learning is sparse and is often given only after the fact (in the final state), which raises the problem of how to assign credit for a positive or negative reward to the earlier states.

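These two points, trial and error and credit assignment under delayed reward, can be illustrated with a tiny tabular Q-learning example (a generic reinforcement-learning sketch, not an SNN algorithm; the chain environment and all constants are illustrative):

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.5, gamma=0.9,
                     epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy chain: start in state 0; action 0 moves
    left, action 1 moves right; reward 1 is given only on reaching the last
    state (a delayed reward). Bootstrapped updates propagate that reward
    backwards, assigning credit to the earlier states."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy: mostly exploit, occasionally explore (trial and error)
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 1 if q[s][1] >= q[s][0] else 0
            s2 = s + 1 if a == 1 else max(0, s - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning_chain()
# After training, moving right has the higher value in every non-terminal state,
# even though only the final transition ever pays a reward.
```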
4. Evolutionary methods for spiking neural networks
       An evolutionary algorithm is a computational model that simulates the process of biological evolution. It is a global probabilistic search method based on mechanisms of biological evolution such as natural selection and genetic variation, and mainly includes genetic algorithms, evolutionary programming, and evolution strategies. Although these algorithms differ in implementation, they share a common feature: they all use the ideas and principles of biological evolution to solve practical problems.

        By organically combining evolutionary algorithms with spiking neural networks, researchers have opened up the field of evolutionary spiking neural networks in order to improve the ability to solve complex problems. An evolutionary spiking neural network can serve as a general framework for adaptive systems: without human intervention, it can adaptively adjust the parameters of neurons, the connection weights, the network structure, and the learning rules.
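The evolutionary idea can be sketched with a minimal (1+1) evolution strategy (a generic sketch: the quadratic stand-in fitness and the target values below are hypothetical; a real evolutionary SNN would score each candidate by simulating the network):

```python
import random

def evolve(fitness, x0, sigma=0.1, generations=200, seed=0):
    """Minimal (1+1) evolution strategy: mutate the parameter vector with
    Gaussian noise and keep the offspring only if its fitness is no worse.
    In an evolutionary SNN, x would encode neuron parameters, weights, or
    structure, and fitness() would score a full network simulation."""
    rng = random.Random(seed)
    parent = list(x0)
    parent_fit = fitness(parent)
    for _ in range(generations):
        child = [xi + rng.gauss(0.0, sigma) for xi in parent]
        child_fit = fitness(child)
        if child_fit <= parent_fit:  # elitist selection (minimization)
            parent, parent_fit = child, child_fit
    return parent, parent_fit

# Stand-in fitness: squared distance to hypothetical target neuron parameters.
target = [0.02, 0.2]
fit = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))
best, best_fit = evolve(fit, [1.0, -1.0])
```

Because selection is elitist, fitness never gets worse; the candidate drifts toward the target without any gradient information, which is what makes such methods applicable to non-differentiable spiking models.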

        The advantage of the HH model is that its description of neurons is very accurate, but its time complexity is high, which makes it unsuitable for large networks. The simpler LIF model has also been proposed to simulate neurons, but it is so concise that some properties of neurons are lost. In 2003, Izhikevich simplified the HH model and proposed the Izhikevich model:

        dv/dt = 0.04*v^2 + 5*v + 140 - u + I
        du/dt = a*(b*v - u)
        with the after-spike reset: if v >= 30 mV, then v <- c and u <- u + d

        Here v represents the membrane potential, u represents the recovery variable of the membrane potential after a spike is emitted, and I represents the input current. The parameters a, b, c, d represent, respectively, the time scale of the recovery variable u (how fast the membrane potential recovers after a spike), the sensitivity of u to subthreshold fluctuations of the membrane potential, the after-spike reset value of the membrane potential (the resting potential), and the after-spike increment of the recovery variable.
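The model can be simulated for a single neuron as follows (a Python sketch: the regular-spiking parameter set a=0.02, b=0.2, c=-65, d=8 follows Izhikevich's 2003 paper, while the constant input current I=10, the step size, and the duration are illustrative choices):

```python
def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, T=1000.0, dt=0.5):
    """Euler simulation of one Izhikevich neuron for T ms with constant input I.

    dv/dt = 0.04 v^2 + 5 v + 140 - u + I
    du/dt = a (b v - u)
    if v >= 30 mV: v <- c, u <- u + d

    Returns the membrane potential trace and the spike times (ms).
    """
    v, u = c, b * c              # start at the reset potential
    trace, spikes = [], []
    for k in range(int(T / dt)):
        if v >= 30.0:            # spike: record it, reset v, bump u
            spikes.append(k * dt)
            v = c
            u += d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        trace.append(v)
    return trace, spikes

trace, spikes = izhikevich()
# With this parameter set and constant input the neuron fires tonically.
```

With I = 10 the subthreshold equations have no equilibrium, so the neuron keeps reaching the 30 mV peak and resetting, producing a regular spike train.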

2. Core program

.....................................................................................
% (the network initialization — Ne, Ni, a, b, c, d, S, v, u — is elided in the original post)
firings=[];          % accumulated [time, neuron index] pairs for all spikes
nn=1;
for t=1:1000 % simulation of 1000 ms
I=[5*randn(Ne,1);2*randn(Ni,1)]; % thalamic input; randn() draws from a normal distribution with mean 0 and variance 1

fired=find(v>=30); % indices of spiking neurons: find returns the indices where v>=30
firings=[firings; t+0*fired,fired]; % append this step's spikes below the previous ones; t+0*fired replicates t to match the size of fired
v(fired)=c(fired);           % reset the membrane potential of fired neurons
u(fired)=u(fired)+d(fired);  % increment the recovery variable of fired neurons
I=I+sum(S(:,fired),2);       % add the synaptic currents from the neurons that fired

disp_v(:,nn)=v;      % record membrane potentials for plotting
disp_v(fired,nn)=30; % draw spikes at the 30 mV peak

v=v+0.5*(0.04*v.^2+5*v+140-u+I); % step 0.5 ms
v=v+0.5*(0.04*v.^2+5*v+140-u+I); % for numerical
u=u+a.*(b.*v-u); % stability

disp_t(nn)=t;
nn=nn+1;
end

figure(1)
subplot(211)
plot(disp_v(1,:)); % membrane potential trace of neuron 1
subplot(212)
plot(firings(:,1),firings(:,2),'.'); % raster plot: spike time vs. neuron index

3. Simulation conclusion

 

Origin blog.csdn.net/ccsss22/article/details/130251841