Deep Learning With Spiking Neurons: Opportunities and Challenges
Michael Pfeiffer, Thomas Pfeil
Abstract
Spiking neural networks (SNNs) are inspired by information processing in biology, where sparse and asynchronous binary signals are communicated and processed in a massively parallel fashion. SNNs on neuromorphic hardware exhibit favorable properties such as low power consumption, fast inference, and event-driven information processing. This makes them interesting candidates for the efficient implementation of deep neural networks, the method of choice for many machine learning tasks. In this review, we address the opportunities that deep spiking networks offer and investigate in detail the challenges associated with training SNNs in a way that makes them competitive with conventional deep learning, but simultaneously allows for efficient mapping to hardware. A wide range of training methods for SNNs is presented, ranging from the conversion of conventional deep networks into SNNs and constrained training before conversion, to spiking variants of backpropagation and biologically motivated variants of STDP. The goal of our review is to define a categorization of SNN training methods and to summarize their advantages and drawbacks. We further discuss relationships between SNNs and binary networks, which are becoming popular for efficient digital hardware implementations. Neuromorphic hardware platforms have great potential to enable deep spiking networks in real-world applications. We compare the suitability of various neuromorphic systems that have been developed over the past years and investigate potential use cases. Neuromorphic approaches and conventional machine learning should not be considered simply as two solutions to the same classes of problems; instead, it is possible to identify and exploit their task-specific advantages. Deep SNNs offer great opportunities to work with new types of event-based sensors and to exploit temporal codes and local on-chip learning; we have so far only scratched the surface of realizing these advantages in practical applications.
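To make the conversion route mentioned in the abstract concrete: the standard idea is to replace each ReLU unit of a trained network with an integrate-and-fire (IF) neuron whose firing rate approximates the original activation. Below is a minimal sketch of this idea, not the paper's method; it uses deterministic rate-driven input instead of stochastic spike trains, and all weights, thresholds, and function names are chosen purely for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def snn_layer_rates(weights, in_rates, v_thresh=1.0, t_sim=1.0, dt=1e-3):
    """Simulate one layer of non-leaky IF neurons driven by inputs at the
    given rates; returns the output firing rates. Illustrative only."""
    n_steps = int(t_sim / dt)
    v = np.zeros(weights.shape[0])        # membrane potentials
    spike_counts = np.zeros(weights.shape[0])
    for _ in range(n_steps):
        v += weights @ (in_rates * dt)    # expected input per time step
        fired = v >= v_thresh             # threshold crossing
        spike_counts += fired
        v[fired] -= v_thresh              # reset by subtraction
    return spike_counts / t_sim

# Toy comparison: IF firing rates approximate the ReLU activations of the
# source ANN layer (up to the rate ceiling of 1/dt and +/- one spike).
rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(4, 8))
x = rng.uniform(0.0, 20.0, size=8)        # input rates
print("ANN ReLU:", relu(W @ x))
print("SNN rate:", snn_layer_rates(W, x))
```

Resetting by subtraction rather than to zero is a common design choice in conversion schemes, since it discards less of the integrated input and therefore tracks the ReLU activation more closely.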
Keywords: binary networks; deep learning; event-based computing; neural networks; neuromorphic engineering; spiking neurons
Year: 2018 PMID: 30410432 PMCID: PMC6209684 DOI: 10.3389/fnins.2018.00774
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
Figure 1. Comparison of deep spiking neural networks (SNNs) to conventional deep neural networks (DNNs). (A) Example of a deep network with two hidden layers; a fully-connected network is shown as an example. Neurons are depicted as circles, connections as lines. (B) Time-stepped, layer-by-layer computation of activations in a conventional DNN with step duration ΔT. The activation values of neurons (rectangles) are visualized with different gray values as an example. The output of the network, e.g., categories in the case of a classification task, is only available after all layers have been completely processed. (C) Like (B), but with binarized activations. (D) The activity of a deep SNN, showing fast and asynchronous propagation of spikes through the layers of the network. (E) The membrane potential of the neuron highlighted in green in (D). When the membrane potential (green) crosses the threshold (black dashed line), a spike is emitted and the membrane potential is reset. (F) The first spike in the output layer (red arrow in D) rapidly estimates the category of the input (assuming a classification task). The accuracy of this estimate increases over time as more spikes occur (red line; Diehl et al., 2015). In contrast, the time-stepped, synchronous operation mode of DNNs results in later, but potentially more accurate, classifications compared to SNNs (blue dashed line and red arrows in B,C).
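The threshold-and-reset dynamics described in panel (E) are those of a (leaky) integrate-and-fire neuron. A minimal sketch of such a neuron follows; all constants (threshold, reset value, membrane time constant) are chosen for illustration, not taken from the paper.

```python
import numpy as np

def lif_trace(input_current, v_thresh=1.0, v_reset=0.0, tau_m=20e-3, dt=1e-3):
    """Leaky integrate-and-fire dynamics as in Figure 1E: the membrane
    potential leaks toward rest, integrates input, and is reset to
    v_reset whenever it crosses v_thresh (emitting a spike)."""
    v = v_reset
    potentials, spikes = [], []
    for i_t in input_current:
        v += dt / tau_m * (-v + i_t)      # leaky integration (Euler step)
        if v >= v_thresh:                 # threshold crossing -> spike
            spikes.append(True)
            v = v_reset                   # reset after the spike
        else:
            spikes.append(False)
        potentials.append(v)
    return np.array(potentials), np.array(spikes)

# Constant suprathreshold input yields the regular spiking seen in panel (D).
v_t, s_t = lif_trace(np.full(100, 1.5))
print("spike times (ms):", np.nonzero(s_t)[0])
```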
Table 1. This table lists neuromorphic systems that have been built and for which results with deep SNNs on classification tasks have been reported (for extended lists of hardware systems that could potentially be used for deep SNNs see, e.g., Indiveri and Liu, 2015; Liu et al., 2016).
| System | Implementation | System size | On-chip learning | Network architecture | Hardware resources used | Task | Accuracy | Classification rate [1/s] | Energy per classification |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TrueNorth (Merolla et al., 2014) | digital | single chip: 4096 cores, 1M neurons, 256M synapses; up to 8 chips | none | deep CNNs: (a, b) Esser et al. (2015), (c) Esser et al. (2016) | (a) 1920 cores (b) 5 cores (c) 8 chips | (a, b) MNIST (c) CIFAR10 and many more | (a) 99.4% (b) 92.7% (c) 89.3% | (a, b) 1000 (c) 1249 | (a) 108 μJ (b) 0.268 μJ (c) 164 μJ |
| SpiNNaker (Furber et al., 2014) | digital | single chip: 18 ARM cores, approx. 1k neurons and 1k synapses per core for real-time simulations; up to 576 chips | flexible, e.g., unsupervised (Jin et al., 2010) | DBN: 2 hidden layers with 500 neurons each (Stromatias et al., 2015) | 1 chip | MNIST | 95% | 91 | 3.3 mJ |
| BrainScaleS (Schemmel et al., 2010) | mixed-signal | wafer with 384 cores, 200k neurons, 45M synapses | STDP (Schemmel et al., 2010) | MLP: 2 hidden layers with 15 neurons each (Schmitt et al., 2017) | 14 cores | downscaled MNIST | 95% | 10000 | 7.3 mJ |
Spike communication in all of these systems is asynchronous. The SpiNNaker system is the only system listed in this table that allows events with payload (see section 1). Note that none of these systems natively supports the batching of inputs commonly used in conventional deep learning.
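One way to relate the table's last two columns across systems: the implied average power is simply the energy per classification times the classification rate. A quick sketch of this arithmetic using the table's own numbers (entry labels are ours):

```python
# Average inference power implied by the table:
# power [W] = energy per classification [J] * classification rate [1/s]
entries = {
    "TrueNorth (c), CIFAR10": (164e-6, 1249),
    "SpiNNaker, MNIST":       (3.3e-3, 91),
    "BrainScaleS, MNIST":     (7.3e-3, 10000),
}
for name, (energy_j, rate_hz) in entries.items():
    print(f"{name}: {energy_j * rate_hz:.3f} W")
```

By this rough measure the TrueNorth and SpiNNaker configurations run at a few hundred milliwatts, while the BrainScaleS wafer draws on the order of tens of watts; idle power and I/O overheads are not captured by these two columns.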