The LIF model and its variants: notes on "Training Spiking Deep Networks for Neuromorphic Hardware"

Standard LIF and softened LIF
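The standard LIF rate curve has a hard threshold, which the softened (soft-LIF) variant smooths by replacing the rectification with a softplus so the rate is differentiable for backpropagation. A minimal sketch of both rate functions is below; the time constants and the smoothing parameter `gamma` are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Standard LIF steady-state firing rate (threshold at input j = 1):
#   r(j) = 1 / (tau_ref + tau_rc * ln(1 + 1 / max(j - 1, 0)))
# Soft LIF replaces max(j - 1, 0) with the softplus
#   rho(j) = gamma * log(1 + exp((j - 1) / gamma)),
# which smooths the kink at threshold. Parameter values are assumptions.

def lif_rate(j, tau_rc=0.02, tau_ref=0.002):
    """Steady-state LIF firing rate; zero below threshold."""
    j = np.asarray(j, dtype=float)
    out = np.zeros_like(j)
    pos = j > 1  # above-threshold inputs
    out[pos] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (j[pos] - 1)))
    return out

def soft_lif_rate(j, tau_rc=0.02, tau_ref=0.002, gamma=0.05):
    """Soft-LIF rate: hard rectification replaced by a softplus."""
    j = np.asarray(j, dtype=float)
    rho = gamma * np.log1p(np.exp((j - 1) / gamma))  # smoothed max(j - 1, 0)
    return 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / rho))
```

Away from threshold the two curves agree closely; near `j = 1` the soft version rises smoothly instead of switching on abruptly, and as `gamma` shrinks it converges to the hard LIF curve.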

We want to estimate the computational efficiency of the original network. There are two main sources of computation when processing an image: the neuron updates and the connection (synapse) updates. Cost is measured in floating-point operations (FLOPs), and the synaptic computation consumes most of the energy.
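This style of estimate can be sketched as a back-of-the-envelope calculation: an ANN pays roughly one multiply-accumulate per connection per image, while an SNN pays one synaptic event per spike per outgoing connection. The per-operation energies and all counts below are illustrative assumptions, not figures from the paper.

```python
# Rough per-image energy comparison between an ANN and an SNN.
# Dominant ANN cost: one multiply-accumulate (2 FLOPs) per connection.
# Dominant SNN cost: one synaptic event per spike per outgoing connection.
# Both per-operation energy figures are illustrative assumptions.

def ann_energy(n_connections, energy_per_flop=4.6e-12):
    """Energy for one ANN forward pass: 2 FLOPs per connection."""
    return 2 * n_connections * energy_per_flop

def snn_energy(n_neurons, avg_rate_hz, sim_time_s, fanout,
               energy_per_synaptic_event=2.4e-11):
    """Energy to simulate an SNN for sim_time_s seconds on one image."""
    total_spikes = n_neurons * avg_rate_hz * sim_time_s
    return total_spikes * fanout * energy_per_synaptic_event

# Example with made-up numbers: a layer of 100 neurons firing at 10 Hz
# for 100 ms, each projecting to 50 targets, versus 5000 ANN connections.
e_ann = ann_energy(5000)
e_snn = snn_energy(n_neurons=100, avg_rate_hz=10, sim_time_s=0.1, fanout=50)
```

The key point the comparison captures is that SNN cost scales with spike *activity* rather than with network *size*: sparse firing means most synapses are idle most of the time.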

These methods provide a new way to translate conventional artificial neural networks onto spike-based neuromorphic hardware, and we provide some evidence that such an implementation is more energy efficient than the ANN implementation. Although our analysis considered only static image classification, we expect the real efficiency of SNNs to become apparent when processing dynamic inputs such as video. Because SNNs are dynamic in nature, they require many simulation steps to process each image; this makes them best suited to dynamic sequences, where adjacent video frames resemble one another and the network does not have to spend time repeatedly "resetting" after sudden changes in the input.

Origin blog.csdn.net/huatianxue/article/details/111941046