A theory of local learning, the learning channel, and the optimality of backpropagation.

Pierre Baldi, Peter Sadowski

Abstract

In a physical neural system, where storage and processing are intimately intertwined, the rules for adjusting the synaptic weights can only depend on variables that are available locally, such as the activity of the pre- and post-synaptic neurons, resulting in local learning rules. A systematic framework for studying the space of local learning rules is obtained by first specifying the nature of the local variables, and then the functional form that ties them together into each learning rule. Such a framework also enables the systematic discovery of new learning rules and the exploration of relationships between learning rules and group symmetries. We study polynomial local learning rules stratified by their degree and analyze their behavior and capabilities in both linear and non-linear units and networks. Stacking local learning rules in deep feedforward networks leads to deep local learning. While deep local learning can learn interesting representations, it cannot learn complex input-output functions, even when targets are available for the top layer. Learning complex input-output functions requires local deep learning, where target information is communicated to the deep layers through a backward learning channel. The nature of the communicated information about the targets and the structure of the learning channel partition the space of learning algorithms. For any learning algorithm, the capacity of the learning channel can be defined as the number of bits provided about the error gradient per weight, divided by the number of required operations per weight. We estimate the capacity associated with several learning algorithms and show that backpropagation outperforms them by simultaneously maximizing the information rate and minimizing the computational cost. This result is also shown to be true for recurrent networks by unfolding them in time. The theory clarifies the concept of Hebbian learning, establishes the power and limitations of local learning rules, introduces the learning channel which enables a formal analysis of the optimality of backpropagation, and explains the sparsity of the space of learning rules discovered so far.
Copyright © 2016 Elsevier Ltd. All rights reserved.
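
To make the notion of a local learning rule concrete, the sketch below (not from the paper; the toy linear teacher, learning rate, and variable names are illustrative assumptions) trains a single linear unit with two rules that depend only on locally available quantities: plain Hebbian learning, a degree-2 polynomial in the pre-synaptic input and post-synaptic output, and the delta rule, which is local only because the target is available at the unit. It illustrates the abstract's distinction: the unsupervised Hebbian rule tracks input statistics (and is unstable), while matching a target requires target information to reach the weights, which in a deep network must arrive through a backward learning channel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised task: a fixed linear "teacher" generates scalar targets.
X = rng.normal(size=(200, 4))
w_teacher = np.array([1.0, -2.0, 0.5, 3.0])
T = X @ w_teacher

eta = 0.01
w_hebb = 0.1 * rng.normal(size=4)  # small random start (Hebb is inert at w = 0)
w_delta = np.zeros(4)

for epoch in range(5):
    for x, t in zip(X, T):
        # Plain Hebbian rule: delta_w = eta * pre * post.
        # A degree-2 polynomial in local variables; unsupervised, and the
        # weight norm grows without bound along the data's principal direction.
        post = w_hebb @ x
        w_hebb += eta * post * x

        # Delta rule: delta_w = eta * (target - post) * pre.
        # Also local, but only because the target t is available at this unit;
        # deep layers would need a backward channel to receive such information.
        post = w_delta @ x
        w_delta += eta * (t - post) * x

print("teacher   :", w_teacher)
print("delta rule:", np.round(w_delta, 2))                 # approaches the teacher
print("Hebb norm :", np.round(np.linalg.norm(w_hebb), 1))  # keeps growing
```

Running the sketch, the delta-rule weights approach the teacher while the Hebbian weight norm diverges, a minimal instance of the abstract's claim that purely local, target-free rules can extract input structure but cannot by themselves fit a prescribed input-output function.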

Keywords:  Backpropagation; Deep learning; Hebbian learning; Learning channel; Supervised learning; Unsupervised learning

Year:  2016        PMID: 27584574     DOI: 10.1016/j.neunet.2016.07.006

Source DB:  PubMed          Journal:  Neural Netw        ISSN: 0893-6080


Related articles:  11 in total

1.  Learning in the Machine: Random Backpropagation and the Deep Learning Channel.

Authors:  Pierre Baldi; Peter Sadowski; Zhiqin Lu
Journal:  Artif Intell       Date:  2018-04-03       Impact factor: 9.088

2.  Can the Brain Do Backpropagation? Exact Implementation of Backpropagation in Predictive Coding Networks.

Authors:  Yuhang Song; Thomas Lukasiewicz; Zhenghua Xu; Rafal Bogacz
Journal:  Adv Neural Inf Process Syst       Date:  2020

3.  Neural network gradient Hamiltonian Monte Carlo.

Authors:  Lingge Li; Andrew Holbrook; Babak Shahbaba; Pierre Baldi
Journal:  Comput Stat       Date:  2019-01-08       Impact factor: 1.000

4.  Modelling Self-Organization in Complex Networks Via a Brain-Inspired Network Automata Theory Improves Link Reliability in Protein Interactomes.

Authors:  Carlo Vittorio Cannistraci
Journal:  Sci Rep       Date:  2018-10-25       Impact factor: 4.379

5.  Direct Feedback Alignment With Sparse Connections for Local Learning.

Authors:  Brian Crafton; Abhinav Parihar; Evan Gebhardt; Arijit Raychowdhury
Journal:  Front Neurosci       Date:  2019-05-24       Impact factor: 4.677

6.  Pioneering topological methods for network-based drug-target prediction by exploiting a brain-network self-organization theory.

Authors:  Claudio Durán; Simone Daminelli; Josephine M Thomas; V Joachim Haupt; Michael Schroeder; Carlo Vittorio Cannistraci
Journal:  Brief Bioinform       Date:  2018-11-27       Impact factor: 11.622

7.  Deep learning to enable color vision in the dark.

Authors:  Andrew W Browne; Ekaterina Deyneka; Francesco Ceccarelli; Josiah K To; Siwei Chen; Jianing Tang; Anderson N Vu; Pierre F Baldi
Journal:  PLoS One       Date:  2022-04-06       Impact factor: 3.240

8.  Can local-community-paradigm and epitopological learning enhance our understanding of how local brain connectivity is able to process, learn and memorize chronic pain?

Authors:  Vaibhav Narula; Antonio Giuliano Zippo; Alessandro Muscoloni; Gabriele Eliseo M Biella; Carlo Vittorio Cannistraci
Journal:  Appl Netw Sci       Date:  2017-08-30

9.  [Review] Data and Power Efficient Intelligence with Neuromorphic Learning Machines.

Authors:  Emre O Neftci
Journal:  iScience       Date:  2018-07-03

10.  [Review] Theories of Error Back-Propagation in the Brain.

Authors:  James C R Whittington; Rafal Bogacz
Journal:  Trends Cogn Sci       Date:  2019-01-28       Impact factor: 20.229
