
Learning in the Machine: Random Backpropagation and the Deep Learning Channel.

Pierre Baldi, Peter Sadowski, Zhiqin Lu.

Abstract

Random backpropagation (RBP) is a variant of the backpropagation algorithm for training neural networks, where the transposes of the forward matrices are replaced by fixed random matrices in the calculation of the weight updates. It is remarkable both because of its effectiveness, in spite of using random matrices to communicate error information, and because it completely removes the taxing requirement of maintaining symmetric weights in a physical neural system. To better understand random backpropagation, we first connect it to the notions of local learning and learning channels. Through this connection, we derive several alternatives to RBP, including skipped RBP (SRBP), adaptive RBP (ARBP), sparse RBP, and their combinations (e.g. ASRBP), and analyze their computational complexity. We then study their behavior through simulations using the MNIST and CIFAR-10 benchmark datasets. These simulations show that most of these variants work robustly, almost as well as backpropagation, and that multiplication by the derivatives of the activation functions is important. As a follow-up, we also study the low end of the number of bits required to communicate error information over the learning channel. We then provide partial intuitive explanations for some of the remarkable properties of RBP and its variations. Finally, we prove several mathematical results, including the convergence to fixed points of linear chains of arbitrary length, the convergence to fixed points of linear autoencoders with decorrelated data, the long-term existence of solutions for linear systems with a single hidden layer and convergence in special cases, and the convergence to fixed points of non-linear chains when the derivative of the activation functions is included.
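The core idea described in the abstract can be sketched as follows: in the backward pass, the fixed random matrix B replaces the transpose of the forward matrix W2, while the derivative of the activation function is kept (the abstract reports this derivative term matters). The two-layer network, tanh activation, squared loss, dimensions, and learning rate below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network: x -> h = tanh(W1 x) -> y = W2 h
n_in, n_hid, n_out = 4, 8, 3
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))
# Fixed random feedback matrix; standard backprop would use W2.T here.
B = rng.normal(scale=0.5, size=(n_hid, n_out))

def rbp_step(x, t, lr=0.05):
    """One random-backpropagation update on a single example; returns the loss."""
    global W1, W2
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - t                            # output error (linear output, squared loss)
    # Backward pass: random B carries the error, activation derivative is kept.
    delta_h = (B @ e) * (1.0 - h ** 2)   # tanh'(a) = 1 - tanh(a)^2
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta_h, x)
    return 0.5 * float(e @ e)

# Repeatedly fit one random input/target pair; the loss should shrink
# even though the feedback weights B never change.
x = rng.normal(size=n_in)
t = rng.normal(size=n_out)
losses = [rbp_step(x, t) for _ in range(200)]
print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Swapping `B @ e` for `W2.T @ e` in `rbp_step` recovers ordinary backpropagation, which makes the single changed term easy to see.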

Year:  2018        PMID: 29731511      PMCID: PMC5931406          DOI: 10.1016/j.artint.2018.03.003

Source DB:  PubMed          Journal:  Artif Intell        ISSN: 0004-3702            Impact factor:   9.088


References:  10 in total

1.  Receptive fields, binocular interaction and functional architecture in the cat's visual cortex.

Authors:  D H HUBEL; T N WIESEL
Journal:  J Physiol       Date:  1962-01       Impact factor: 5.182

2.  Deep architectures for protein contact map prediction.

Authors:  Pietro Di Lena; Ken Nagata; Pierre Baldi
Journal:  Bioinformatics       Date:  2012-07-30       Impact factor: 6.937

3.  Complex-valued autoencoders.

Authors:  Pierre Baldi; Zhiqin Lu
Journal:  Neural Netw       Date:  2012-05-04

4.  Searching for exotic particles in high-energy physics with deep learning.

Authors:  P Baldi; P Sadowski; D Whiteson
Journal:  Nat Commun       Date:  2014-07-02       Impact factor: 14.919

5.  A theory of local learning, the learning channel, and the optimality of backpropagation.

Authors:  Pierre Baldi; Peter Sadowski
Journal:  Neural Netw       Date:  2016-08-05

6.  Learning in the machine: The symmetries of the deep learning channel.

Authors:  Pierre Baldi; Peter Sadowski; Zhiqin Lu
Journal:  Neural Netw       Date:  2017-09-05

7.  Neocognitron: a self organizing neural network model for a mechanism of pattern recognition unaffected by shift in position.

Authors:  K Fukushima
Journal:  Biol Cybern       Date:  1980       Impact factor: 2.086

8.  The Dropout Learning Algorithm.

Authors:  Pierre Baldi; Peter Sadowski
Journal:  Artif Intell       Date:  2014-05       Impact factor: 9.088

9.  Predicting effects of noncoding variants with deep learning-based sequence model.

Authors:  Jian Zhou; Olga G Troyanskaya
Journal:  Nat Methods       Date:  2015-08-24       Impact factor: 28.547

10.  What time is it? Deep learning approaches for circadian rhythms.

Authors:  Forest Agostinelli; Nicholas Ceglia; Babak Shahbaba; Paolo Sassone-Corsi; Pierre Baldi
Journal:  Bioinformatics       Date:  2016-06-15       Impact factor: 6.937

Cited by:  9 in total

1.  Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.

Authors:  Emre O Neftci; Charles Augustine; Somnath Paul; Georgios Detorakis
Journal:  Front Neurosci       Date:  2017-06-21       Impact factor: 4.677

2.  Neural and Synaptic Array Transceiver: A Brain-Inspired Computing Framework for Embedded Learning.

Authors:  Georgios Detorakis; Sadique Sheik; Charles Augustine; Somnath Paul; Bruno U Pedroni; Nikil Dutt; Jeffrey Krichmar; Gert Cauwenberghs; Emre Neftci
Journal:  Front Neurosci       Date:  2018-08-29       Impact factor: 4.677

3.  SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.

Authors:  Friedemann Zenke; Surya Ganguli
Journal:  Neural Comput       Date:  2018-04-13       Impact factor: 2.026

4.  Deep Supervised Learning Using Local Errors.

Authors:  Hesham Mostafa; Vishwajith Ramesh; Gert Cauwenberghs
Journal:  Front Neurosci       Date:  2018-08-31       Impact factor: 4.677

5.  Multi-Timescale Memory Dynamics Extend Task Repertoire in a Reinforcement Learning Network With Attention-Gated Memory.

Authors:  Marco Martinolli; Wulfram Gerstner; Aditya Gilra
Journal:  Front Comput Neurosci       Date:  2018-07-12       Impact factor: 2.380

6.  Direct Feedback Alignment With Sparse Connections for Local Learning.

Authors:  Brian Crafton; Abhinav Parihar; Evan Gebhardt; Arijit Raychowdhury
Journal:  Front Neurosci       Date:  2019-05-24       Impact factor: 4.677

7.  Learning Without Feedback: Fixed Random Learning Signals Allow for Feedforward Training of Deep Neural Networks.

Authors:  Charlotte Frenkel; Martin Lefebvre; David Bol
Journal:  Front Neurosci       Date:  2021-02-10       Impact factor: 4.677

8.  The neural coding framework for learning generative models.

Authors:  Alexander Ororbia; Daniel Kifer
Journal:  Nat Commun       Date:  2022-04-19       Impact factor: 17.694

9.  Data and Power Efficient Intelligence with Neuromorphic Learning Machines. (Review)

Authors:  Emre O Neftci
Journal:  iScience       Date:  2018-07-03
