
Equivalent-accuracy accelerated neural-network training using analogue memory.

Stefano Ambrogio, Pritish Narayanan, Hsinyu Tsai, Robert M Shelby, Irem Boybat, Carmelo di Nolfo, Severin Sidler, Massimo Giordano, Martina Bodini, Nathan C P Farinha, Benjamin Killeen, Christina Cheng, Yassine Jaoudi, Geoffrey W Burr.

Abstract

Neural-network training can be slow and energy intensive, owing to the need to transfer the weight data for the network between conventional digital memory chips and processor chips. Analogue non-volatile memory can accelerate the neural-network training algorithm known as backpropagation by performing parallelized multiply-accumulate operations in the analogue domain at the location of the weight data. However, the classification accuracies of such in situ training using non-volatile-memory hardware have generally been less than those of software-based training, owing to insufficient dynamic range and excessive weight-update asymmetry. Here we demonstrate mixed hardware-software neural-network implementations that involve up to 204,900 synapses and that combine long-term storage in phase-change memory, near-linear updates of volatile capacitors and weight-data transfer with 'polarity inversion' to cancel out inherent device-to-device variations. We achieve generalization accuracies (on previously unseen data) equivalent to those of software-based training on various commonly used machine-learning test datasets (MNIST, MNIST-backrand, CIFAR-10 and CIFAR-100). The computational energy efficiency of 28,065 billion operations per second per watt and throughput per area of 3.6 trillion operations per second per square millimetre that we calculate for our implementation exceed those of today's graphical processing units by two orders of magnitude. This work provides a path towards hardware accelerators that are both fast and energy efficient, particularly on fully connected neural-network layers.
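The 'polarity inversion' idea mentioned in the abstract can be illustrated with a toy simulation: if every transfer of a weight into non-volatile memory picks up a fixed device-specific offset, then storing the weight with alternating sign on successive transfers makes that offset enter with alternating sign and cancel on read-back. The Python sketch below is illustrative only; the variable names, additive-offset noise model, and transfer schedule are assumptions for exposition, not the paper's implementation.

```python
# Toy illustration of 'polarity inversion' during weight transfer.
# Assumption-laden sketch, not the paper's code: device-to-device variation is
# modelled as a fixed additive offset per synapse, deposited on every transfer.
import numpy as np

rng = np.random.default_rng(0)
n_synapses = 1000
w_true = rng.normal(0.0, 1.0, n_synapses)      # target weights
offset = rng.normal(0.0, 0.05, n_synapses)     # fixed per-device transfer error

n_transfers = 10                               # even, so signs cancel pairwise
w_naive = np.zeros(n_synapses)                 # same-sign transfers: error persists
w_inverted = np.zeros(n_synapses)              # alternating-sign transfers

for k in range(n_transfers):
    # Naive scheme: every transfer deposits w + offset, so the offset survives
    # averaging over transfers.
    w_naive += (w_true + offset) / n_transfers
    # Polarity inversion: store sign*w + offset, read back multiplied by sign,
    # so the offset enters as sign*offset and cancels over an even number of rounds.
    sign = 1.0 if k % 2 == 0 else -1.0
    w_inverted += sign * (sign * w_true + offset) / n_transfers

print("naive RMS error    :", np.sqrt(np.mean((w_naive - w_true) ** 2)))    # ~0.05
print("inverted RMS error :", np.sqrt(np.mean((w_inverted - w_true) ** 2))) # ~0
```

Under these assumptions the naive scheme retains the full per-device error while the inverted scheme cancels it exactly; in the paper this cancellation is combined with phase-change memory for long-term storage and a volatile capacitor for near-linear weight updates.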

Year: 2018        PMID: 29875487        DOI: 10.1038/s41586-018-0180-5

Source DB: PubMed        Journal: Nature        ISSN: 0028-0836        Impact factor: 49.962


Cited by: 47 in total

1.  RAPA-ConvNets: Modified Convolutional Networks for Accelerated Training on Architectures With Analog Arrays.

Authors:  Malte J Rasch; Tayfun Gokmen; Mattia Rigotti; Wilfried Haensch
Journal:  Front Neurosci       Date:  2019-07-30       Impact factor: 4.677

2.  [Review] Optical Computing: Status and Perspectives.

Authors:  Nikolay L Kazanskiy; Muhammad A Butt; Svetlana N Khonina
Journal:  Nanomaterials (Basel)       Date:  2022-06-24       Impact factor: 5.719

3.  Dynamical stochastic simulation of complex electrical behavior in neuromorphic networks of metallic nanojunctions.

Authors:  F Mambretti; M Mirigliano; E Tentori; N Pedrani; G Martini; P Milani; D E Galli
Journal:  Sci Rep       Date:  2022-07-18       Impact factor: 4.996

4.  Neuromorphic object localization using resistive memories and ultrasonic transducers.

Authors:  Filippo Moro; Emmanuel Hardy; Bruno Fain; Thomas Dalgaty; Paul Clémençon; Alessio De Prà; Eduardo Esmanhotto; Niccolò Castellani; François Blard; François Gardien; Thomas Mesquida; François Rummens; David Esseni; Jérôme Casas; Giacomo Indiveri; Melika Payvand; Elisa Vianello
Journal:  Nat Commun       Date:  2022-06-18       Impact factor: 17.694

5.  Optimised weight programming for analogue memory-based deep neural networks.

Authors:  Charles Mackin; Malte J Rasch; An Chen; Jonathan Timcheck; Robert L Bruce; Ning Li; Pritish Narayanan; Stefano Ambrogio; Manuel Le Gallo; S R Nandakumar; Andrea Fasoli; Jose Luquin; Alexander Friz; Abu Sebastian; Hsinyu Tsai; Geoffrey W Burr
Journal:  Nat Commun       Date:  2022-06-30       Impact factor: 17.694

6.  [Review] Applications and Techniques for Fast Machine Learning in Science.

Authors:  Allison McCarn Deiana; Nhan Tran; Joshua Agar; Michaela Blott; Giuseppe Di Guglielmo; Javier Duarte; Philip Harris; Scott Hauck; Mia Liu; Mark S Neubauer; Jennifer Ngadiuba; Seda Ogrenci-Memik; Maurizio Pierini; Thea Aarrestad; Steffen Bähr; Jürgen Becker; Anne-Sophie Berthold; Richard J Bonventre; Tomás E Müller Bravo; Markus Diefenthaler; Zhen Dong; Nick Fritzsche; Amir Gholami; Ekaterina Govorkova; Dongning Guo; Kyle J Hazelwood; Christian Herwig; Babar Khan; Sehoon Kim; Thomas Klijnsma; Yaling Liu; Kin Ho Lo; Tri Nguyen; Gianantonio Pezzullo; Seyedramin Rasoulinezhad; Ryan A Rivera; Kate Scholberg; Justin Selig; Sougata Sen; Dmitri Strukov; William Tang; Savannah Thais; Kai Lukas Unger; Ricardo Vilalta; Belina von Krosigk; Shen Wang; Thomas K Warburton
Journal:  Front Big Data       Date:  2022-04-12

7.  Synaptic metaplasticity in binarized neural networks.

Authors:  Axel Laborieux; Maxence Ernoult; Tifenn Hirtzlin; Damien Querlioz
Journal:  Nat Commun       Date:  2021-05-05       Impact factor: 14.919

8.  [Review] Neuromorphic Devices for Bionic Sensing and Perception.

Authors:  Mingyue Zeng; Yongli He; Chenxi Zhang; Qing Wan
Journal:  Front Neurosci       Date:  2021-06-29       Impact factor: 4.677

9.  Toward Software-Equivalent Accuracy on Transformer-Based Deep Neural Networks With Analog Memory Devices.

Authors:  Katie Spoon; Hsinyu Tsai; An Chen; Malte J Rasch; Stefano Ambrogio; Charles Mackin; Andrea Fasoli; Alexander M Friz; Pritish Narayanan; Milos Stanisavljevic; Geoffrey W Burr
Journal:  Front Comput Neurosci       Date:  2021-07-05       Impact factor: 2.380

10.  On-Chip Training Spiking Neural Networks Using Approximated Backpropagation With Analog Synaptic Devices.

Authors:  Dongseok Kwon; Suhwan Lim; Jong-Ho Bae; Sung-Tae Lee; Hyeongsu Kim; Young-Tak Seo; Seongbin Oh; Jangsaeng Kim; Kyuho Yeom; Byung-Gook Park; Jong-Ho Lee
Journal:  Front Neurosci       Date:  2020-07-07       Impact factor: 4.677
