Brian2 Spiking Neural Network: Neuron Learning Record

Spiking neural networks (SNNs) are known as the third generation of neural networks and have higher biological plausibility. In the brain-inspired computing research that has emerged in recent years, SNNs have occupied a central position. At comparable performance, chips based on spiking neural networks consume less power than those based on artificial neural networks, and are more stable and robust.
I installed Brian2 through pip, ran some of the example programs from the official documentation, and added some notes from my own learning.

from brian2 import *
start_scope()
# start_scope() ensures that any Brian objects created before
# this call are not included in the next simulation run.
tau = 10*ms
eqs = 'dv/dt = (1-v)/tau : 1'
G = NeuronGroup(1, eqs, method='exact')
print('Before v = %s' % G.v[0])
run(100*ms)
print('After v = %s' % G.v[0])
start_scope()
G = NeuronGroup(1, eqs, method='exact')
M = StateMonitor(G, 'v', record=0)
run(30*ms)
plot(M.t/ms, M.v[0], 'C0', label='Brian')
plot(M.t/ms, 1-exp(-M.t/tau), 'C1--',label='Analytic')
xlabel('Time (ms)')
ylabel('v')
legend()
show()

(Figure: simulated membrane potential v compared with the analytic solution.)

Pay attention to dimensions (units) in the program. For example, adding quantities with different units, or writing an equation whose left- and right-hand sides have inconsistent dimensions, will raise an error. If the /tau is removed from eqs = 'dv/dt = (1-v)/tau : 1', the two sides no longer have the same dimensions and Brian reports an error.

In this program, NeuronGroup() creates one neuron whose model is defined by the differential equation dv/dt = (1-v)/tau : 1, and we use a StateMonitor to monitor the state of the neuron. Its first two arguments are the group to record from and the variable to record. We also specify record=0, which means we record all values of neuron 0. We have to specify which neurons we want to record because, in large simulations with many neurons, recording the values of all of them usually takes too much RAM.

The solid blue line is the simulated trace of v, and the dashed orange line is the analytic solution of the differential equation dv/dt = (1-v)/tau, namely 1 - exp(-t/tau); the two curves coincide.
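That the analytic curve really solves the equation can be checked without Brian2 at all. The sketch below Euler-integrates dv/dt = (1 - v)/tau in plain Python (dt and the loop are my own illustrative choices, not part of the tutorial) and measures the worst deviation from 1 - exp(-t/tau):

```python
import math

# Euler-integrate dv/dt = (1 - v)/tau and compare with the
# analytic solution v(t) = 1 - exp(-t/tau) (plain Python, no Brian2).
tau = 10.0   # ms
dt = 0.01    # ms; a small step keeps Euler close to the exact curve
v = 0.0
t = 0.0
max_err = 0.0
while t < 30.0:
    v += (1.0 - v) / tau * dt
    t += dt
    analytic = 1.0 - math.exp(-t / tau)
    max_err = max(max_err, abs(v - analytic))

print(max_err)  # maximum deviation over 30 ms stays well below 1e-3
```

With dt = 0.01 ms the numerical and analytic traces agree to better than one part in a thousand, which is why the two curves in the figure lie on top of each other.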

start_scope()
tau = 10*ms
eqs = '''
dv/dt = (1-v)/tau : 1
'''
G = NeuronGroup(1, eqs, threshold='v>0.8', reset='v = 0', method='exact')
M = StateMonitor(G, 'v', record=0)
run(50*ms)
plot(M.t/ms, M.v[0])
xlabel('Time (ms)')
ylabel('v')
show()

(Figure: v repeatedly rising to the 0.8 threshold and resetting to 0.)

We added two new keywords to the NeuronGroup declaration: threshold='v>0.8' and reset='v = 0'. This means that when v exceeds 0.8 the neuron fires a spike, and immediately after the spike v is reset to 0.
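The integrate-threshold-reset loop is simple enough to sketch in plain Python (this is my own illustration, not Brian2's implementation; it uses Euler steps with dt = 0.1 ms, so spike times come out slightly later than Brian's exact integrator):

```python
# Plain-Python sketch of integrate-threshold-reset:
# Euler-step dv/dt = (1 - v)/tau; when v crosses 0.8, record a spike
# and reset v to 0, exactly as threshold='v>0.8', reset='v = 0' do.
tau, dt = 10.0, 0.1      # ms
threshold, reset = 0.8, 0.0
v = 0.0
spikes = []
for step in range(500):  # 50 ms of simulated time
    v += (1.0 - v) / tau * dt
    if v > threshold:
        spikes.append(round((step + 1) * dt, 1))
        v = reset
print(spikes)  # three evenly spaced spikes, roughly every 16 ms
```

The three spikes land about 16 ms apart, matching the sawtooth in the figure.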

from brian2 import *
start_scope()
tau = 10*ms
eqs ='dv/dt = (1-v)/tau : 1'
G = NeuronGroup(1, eqs, threshold='v>0.8', reset='v = 0', method='exact')
statemon = StateMonitor(G, 'v', record=0)
spikemon = SpikeMonitor(G)
run(50*ms)
print('Spike times: %s' % spikemon.t[:])
plot(statemon.t/ms, statemon.v[0])
for t in spikemon.t:
    axvline(t/ms, ls='--', c='C1', lw=3)
xlabel('Time (ms)')
ylabel('v')
show()

(Figure: trace of v with dashed vertical lines marking the spike times.)

Output: Spike times: [16. 32.1 48.2] ms
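The even spacing of these spike times can be derived by hand: after each reset v restarts from 0, so every inter-spike interval is the time for 1 - exp(-t/tau) to reach 0.8, i.e. tau * ln(1/0.2). A quick check:

```python
import math

# Inter-spike interval for dv/dt = (1 - v)/tau, v resetting to 0,
# threshold 0.8: solve 1 - exp(-t/tau) = 0.8 for t.
tau = 10.0  # ms
interval = tau * math.log(1.0 / (1.0 - 0.8))
print(interval)                                    # about 16.09 ms
print([round(k * interval, 1) for k in (1, 2, 3)])  # close to the printed 16, 32.1, 48.2 ms
```

The ~16.09 ms analytic interval matches the roughly 16.1 ms spacing of the recorded spikes (the small discrepancies come from spikes being registered on the simulation's time grid).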

A common feature of neuronal models is the refractory period: after a neuron fires a spike, it cannot fire another spike for a certain period of time, until that period is over. Here's how we do it in Brian.

from brian2 import *
start_scope()
tau = 10*ms
eqs = '''
dv/dt = (1-v)/tau : 1 (unless refractory)
'''
G = NeuronGroup(1, eqs, threshold='v>0.8', reset='v = 0', refractory=5*ms, method='exact')
statemon = StateMonitor(G, 'v', record=0)
spikemon = SpikeMonitor(G)
run(50*ms)
plot(statemon.t/ms, statemon.v[0])
for t in spikemon.t:
    axvline(t/ms, ls='--', c='C1', lw=3)
xlabel('Time (ms)')
ylabel('v')
show()

(Figure: v with a 5 ms refractory period; after each spike v stays at 0 for 5 ms before rising again.)

Although we add the parameter refractory=5*ms in the NeuronGroup call, we still have to add (unless refractory) to the differential equation. Otherwise the neuron still cannot spike during the refractory period, but its membrane potential v is not held fixed and keeps evolving, as the next example shows.

from brian2 import *
start_scope()
tau = 5*ms
eqs = '''
dv/dt = (1-v)/tau : 1
'''
G = NeuronGroup(1, eqs, threshold='v>0.8', reset='v = 0', refractory=15*ms, method='exact')
statemon = StateMonitor(G, 'v', record=0)
spikemon = SpikeMonitor(G)
run(50*ms)
plot(statemon.t/ms, statemon.v[0])
for t in spikemon.t:
    axvline(t/ms, ls='--', c='C1', lw=3)
axhline(0.8, ls=':', c='C2', lw=3)
xlabel('Time (ms)')
ylabel('v')
print("Spike times: %s" % spikemon.t[:])
show()

(Figure: with refractory=15*ms but no (unless refractory), v crosses the 0.8 threshold (dotted green line) during the refractory period without spiking.)

So what's going on here? The behavior of the first spike is the same: v rises to 0.8 at about t = 8 ms, the neuron fires a spike, and v is immediately reset to 0. Since the refractory period is now 15 ms, the neuron cannot spike again until t = 8 + 15 = 23 ms. Immediately after the first spike, v starts to rise again, because we did not specify (unless refractory) in the equations; it reaches 0.8 (the dotted green line) again after about another 8 ms, but because the neuron is still refractory it is not reset at the threshold. It continues to rise, and only once the refractory period ends can it spike and reset again.
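This timing can be reproduced in a plain-Python sketch (my own illustration, using Euler steps with dt = 0.1 ms instead of Brian2's exact integrator, so the times may differ slightly from Brian's printout):

```python
# Refractoriness WITHOUT the (unless refractory) clamp:
# v keeps evolving during the refractory period, may cross the
# threshold without spiking, and only spikes once the period ends.
tau, dt = 5.0, 0.1          # ms, matching the run above
threshold = 0.8
refractory_steps = 150      # 15 ms refractory period / dt
v = 0.0
last_spike_step = None
spikes = []
crossed_during_refractory = False
for step in range(500):     # 50 ms of simulated time
    v += (1.0 - v) / tau * dt   # v evolves even while refractory
    if v > threshold:
        if last_spike_step is None or step - last_spike_step > refractory_steps:
            spikes.append(round((step + 1) * dt, 1))
            v = 0.0
            last_spike_step = step
        else:
            crossed_during_refractory = True
print(spikes, crossed_during_refractory)
```

The sketch fires at roughly t = 8, 23, 38 ms, and the flag confirms that v crossed the threshold mid-refractory without producing a spike, just as the plot shows.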

Multiple neurons are demonstrated below.

from brian2 import *
start_scope()
N = 100
tau = 10*ms
eqs = '''
dv/dt = (2-v)/tau : 1
'''
G = NeuronGroup(N, eqs, threshold='v>1', reset='v=0', method='exact')
G.v = 'rand()'
spikemon = SpikeMonitor(G)
run(50*ms)
plot(spikemon.t/ms, spikemon.i, '.k')
xlabel('Time (ms)')
ylabel('Neuron index')
show()

(Figure: raster plot of spike times against neuron index.)

The variable N determines the number of neurons, and G.v = 'rand()' initializes each neuron with a different uniform random value between 0 and 1, just so that each neuron does something different. spikemon.t holds the spike times, and spikemon.i gives the index of the neuron that fired each spike. This is the standard "raster plot" used in neuroscience.
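Why do the random initial values spread the spikes out? With dv/dt = (2 - v)/tau, a neuron starting at v0 follows v(t) = 2 - (2 - v0)·exp(-t/tau) and reaches the threshold v = 1 at t = tau · ln(2 - v0), so each initial value maps to a different first-spike time. A small check of this formula (the seed and sample count are arbitrary illustrative choices):

```python
import math
import random

# First-spike time for dv/dt = (2 - v)/tau starting from v0:
# solve 2 - (2 - v0) * exp(-t/tau) = 1  =>  t = tau * ln(2 - v0).
tau = 10.0  # ms
random.seed(42)
first_spike_times = sorted(tau * math.log(2.0 - random.random())
                           for _ in range(5))
print([round(t, 2) for t in first_spike_times])  # larger v0 -> earlier spike
```

Every first-spike time falls between 0 and tau·ln(2) ≈ 6.9 ms, which is why the left edge of the raster plot is densely staggered rather than a single vertical line.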

from brian2 import *
start_scope()
N = 100
tau = 10*ms
v0_max = 3.
duration = 1000*ms
eqs = '''
dv/dt = (v0-v)/tau : 1 (unless refractory)
v0 : 1
'''
G = NeuronGroup(N, eqs, threshold='v>1', reset='v=0', refractory=5*ms, method='exact')
M = SpikeMonitor(G)
G.v0 = 'i*v0_max/(N-1)'
run(duration)
figure(figsize=(12,4))
subplot(121)
plot(M.t/ms, M.i, '.k')
xlabel('Time (ms)')
ylabel('Neuron index')
subplot(122)
plot(G.v0, M.count/duration)
xlabel('v0')
ylabel('Firing rate (sp/s)')
show()

(Figure: left, raster plot of all neurons; right, firing rate as a function of v0.)

In this example, each neuron's v is driven exponentially towards its own v0, but when v > 1 it fires a spike and resets. The result is that the rate at which it fires spikes depends on the value of v0: for v0 < 1 it never spikes, and as v0 gets larger it spikes at a higher rate. The plot on the right shows the firing rate as a function of v0 — the f-I curve of this neuron model.

Note that in the figure we used the count variable of the SpikeMonitor: this is an array of the number of spikes fired by each neuron in the group. Dividing it by the duration of the run gives the firing rate.
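The f-I curve also has a closed form we can check against: starting from v = 0, v(t) = v0·(1 - exp(-t/tau)) reaches the threshold 1 after tau·ln(v0/(v0 - 1)) when v0 > 1, and adding the 5 ms refractory period gives the inter-spike interval. A plain-Python sketch (the sample v0 values are arbitrary):

```python
import math

# Theoretical firing rate of the LIF neuron above:
# ISI = refractory + tau * ln(v0 / (v0 - 1)) for v0 > 1, else no spikes.
tau, refractory = 10.0, 5.0   # ms

def firing_rate(v0):
    """Spikes per second for drive v0 (0 at or below threshold)."""
    if v0 <= 1.0:
        return 0.0
    isi = refractory + tau * math.log(v0 / (v0 - 1.0))  # ms
    return 1000.0 / isi                                  # sp/s

for v0 in (0.5, 1.5, 2.0, 3.0):
    print(v0, round(firing_rate(v0), 1))
```

The rate is exactly zero up to v0 = 1 and then rises steeply, reproducing the sharp onset seen in the right-hand plot.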

from brian2 import *
start_scope()
N = 100
tau = 10*ms
v0_max = 3.
duration = 1000*ms
sigma = 0.2
eqs = '''
dv/dt = (v0-v)/tau+sigma*xi*tau**-0.5 : 1 (unless refractory)
v0 : 1
'''
G = NeuronGroup(N, eqs, threshold='v>1', reset='v=0', refractory=5*ms, method='euler')
M = SpikeMonitor(G)
G.v0 = 'i*v0_max/(N-1)'
run(duration)
figure(figsize=(12,4))
subplot(121)
plot(M.t/ms, M.i, '.k')
xlabel('Time (ms)')
ylabel('Neuron index')
subplot(122)
plot(G.v0, M.count/duration)
xlabel('v0')
ylabel('Firing rate (sp/s)')
show()

Often when making models of neurons, we include a random element to model the effect of various forms of neural noise. In Brian, we can do this by using the symbol xi in differential equations. Strictly speaking, this symbol is a "stochastic differential", but you can sort of think of it as a Gaussian random variable with mean 0 and standard deviation 1. We do have to take into account the way stochastic differentials scale with time, which is why we multiply it by tau**-0.5 in the equations above (see a textbook on stochastic differential equations for more details). Note that we also changed the method keyword argument to 'euler' (which stands for the Euler-Maruyama method); the 'exact' method that we used earlier is not applicable to stochastic differential equations.
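The time-scaling of xi becomes concrete once the equation is discretized: per Euler-Maruyama step, xi·tau**-0.5 contributes a Gaussian increment scaled by sqrt(dt/tau). The plain-Python sketch below (my own illustration, subthreshold only, no spiking; seed and step count are arbitrary) simulates this and checks the stationary spread, which for this Ornstein-Uhlenbeck-type process is sigma/sqrt(2):

```python
import math
import random

# Euler-Maruyama discretization of dv/dt = (v0 - v)/tau + sigma*xi*tau**-0.5:
# each step adds a Gaussian increment scaled by sqrt(dt), hence the
# sigma * sqrt(dt/tau) noise term below.
random.seed(1)
tau, dt = 10.0, 0.1        # ms
v0, sigma = 0.5, 0.2
v = v0
samples = []
for _ in range(200000):
    noise = sigma * math.sqrt(dt / tau) * random.gauss(0.0, 1.0)
    v += (v0 - v) / tau * dt + noise
    samples.append(v)
mean = sum(samples) / len(samples)
std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(round(mean, 2), round(std, 2))
# v fluctuates around v0; its stationary std is sigma / sqrt(2), about 0.14
```

Because v now fluctuates around v0, neurons with v0 slightly below 1 occasionally get kicked over the threshold, which is what smooths the f-I curve.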

I didn't fully understand this part of the program at first, so I kept the original explanation above. After reading some literature, I found that, compared with the previous piece of code, noise has been added to the model, which turns the sharp onset of the firing-rate curve into a smooth S-shaped rise. The following is the running result.

(Figure: with noise, the firing-rate curve rises smoothly in an S shape instead of switching on sharply at v0 = 1.)

Origin blog.csdn.net/cyy0789/article/details/120338403