Rethinking the performance comparison between SNNs and ANNs.

Lei Deng, Yujie Wu, Xing Hu, Ling Liang, Yufei Ding, Guoqi Li, Guangshe Zhao, Peng Li, Yuan Xie.

Abstract

Artificial neural networks (ANNs), a popular path towards artificial intelligence, have achieved remarkable success via mature models, various benchmarks, open-source datasets, and powerful computing platforms. Spiking neural networks (SNNs), a category of promising models that mimic the neuronal dynamics of the brain, have gained much attention for brain-inspired computing and have been widely deployed on neuromorphic devices. However, there have long been debates and skepticism about the value of SNNs in practical applications. Aside from the low-power benefit of spike-driven processing, SNNs usually perform worse than ANNs, especially in terms of application accuracy. Recently, researchers have attempted to address this issue by borrowing learning methodologies from ANNs, such as backpropagation, to train high-accuracy SNN models. Rapid progress in this domain continuously produces impressive results with ever-increasing network sizes, following a growth path similar to that of deep learning. Although these approaches endow SNNs with the capability to approach the accuracy of ANNs, the natural advantages of SNNs and the ways to outperform ANNs are potentially lost due to the use of ANN-oriented workloads and simplistic evaluation metrics. In this paper, we take the visual recognition task as a case study to answer the questions of "what workloads are ideal for SNNs, and what evaluation of SNNs makes sense". We design a series of contrast tests using different types of datasets (ANN-oriented and SNN-oriented), diverse processing models, signal conversion methods, and learning algorithms. We propose comprehensive metrics on application accuracy and the cost of memory and compute to evaluate these models, and conduct extensive experiments. We show that on ANN-oriented workloads, SNNs fail to beat their ANN counterparts, while on SNN-oriented workloads, SNNs can indeed perform better.
We further demonstrate that in SNNs there exists a trade-off between application accuracy and execution cost, which is affected by the simulation time window and the firing threshold. Based on these analyses, we recommend the most suitable model for each scenario. To the best of our knowledge, this is the first work to use systematic comparisons to explicitly reveal that straightforward workload porting from ANNs to SNNs is unwise, although many works are doing so, and that comprehensive evaluation indeed matters. Finally, we highlight the urgent need to build a benchmarking framework for SNNs with broader tasks, datasets, and metrics.
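The signal conversion and the accuracy–cost trade-off mentioned in the abstract can be illustrated with a minimal sketch, assuming Poisson rate coding of a frame image into spikes and a single leaky integrate-and-fire (LIF) neuron; the function names, leak factor, and parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def poisson_encode(image, time_window, rng=None):
    """Rate-code a [0, 1]-normalized image into a binary spike train:
    each pixel fires per timestep with probability equal to its intensity,
    so the spike count over the window approximates the pixel value."""
    rng = np.random.default_rng(rng)
    return (rng.random((time_window,) + image.shape) < image).astype(np.uint8)

def lif_spike_count(input_current, threshold, time_window, leak=0.9):
    """Simulate one leaky integrate-and-fire neuron and count its spikes.
    On event-driven hardware, spike count is a proxy for compute cost."""
    v, spikes = 0.0, 0
    for _ in range(time_window):
        v = leak * v + input_current   # leak, then integrate the input
        if v >= threshold:             # fire and hard-reset on crossing
            spikes += 1
            v = 0.0
    return spikes

# A longer time window gives finer-grained rate estimates (better accuracy)
# but proportionally more spikes to process (higher cost); raising the
# firing threshold cuts the spike count at the price of a coarser signal.
img = np.array([[0.9, 0.1]])
spikes = poisson_encode(img, time_window=100, rng=0)
low_cost = lif_spike_count(0.5, threshold=2.0, time_window=50)
high_cost = lif_spike_count(0.5, threshold=1.0, time_window=50)
```

With a fixed input current, the higher-threshold neuron fires fewer spikes over the same window, which is the cost side of the trade-off the paper analyzes.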
Copyright © 2019 Elsevier Ltd. All rights reserved.

Keywords:  Artificial neural networks; Benchmark; Deep learning; Neuromorphic computing; Spiking neural networks

Year:  2019        PMID: 31586857     DOI: 10.1016/j.neunet.2019.09.005

Source DB:  PubMed          Journal:  Neural Netw        ISSN: 0893-6080


  6 in total

1.  Efficient Processing of Spatio-Temporal Data Streams With Spiking Neural Networks.

Authors:  Alexander Kugele; Thomas Pfeil; Michael Pfeiffer; Elisabetta Chicca
Journal:  Front Neurosci       Date:  2020-05-05       Impact factor: 4.677

2.  End-to-End Implementation of Various Hybrid Neural Networks on a Cross-Paradigm Neuromorphic Chip.

Authors:  Guanrui Wang; Songchen Ma; Yujie Wu; Jing Pei; Rong Zhao; Luping Shi
Journal:  Front Neurosci       Date:  2021-02-02       Impact factor: 4.677

3.  Is Neuromorphic MNIST Neuromorphic? Analyzing the Discriminative Power of Neuromorphic Datasets in the Time Domain.

Authors:  Laxmi R Iyer; Yansong Chua; Haizhou Li
Journal:  Front Neurosci       Date:  2021-03-25       Impact factor: 4.677

4.  ES-ImageNet: A Million Event-Stream Classification Dataset for Spiking Neural Networks.

Authors:  Yihan Lin; Wei Ding; Shaohua Qiang; Lei Deng; Guoqi Li
Journal:  Front Neurosci       Date:  2021-11-25       Impact factor: 4.677

5.  Enhancing spiking neural networks with hybrid top-down attention.

Authors:  Faqiang Liu; Rong Zhao
Journal:  Front Neurosci       Date:  2022-08-22       Impact factor: 5.152

6.  Visual explanations from spiking neural networks using inter-spike intervals.

Authors:  Youngeun Kim; Priyadarshini Panda
Journal:  Sci Rep       Date:  2021-09-24       Impact factor: 4.379

