
Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing Its Gradient Estimator Bias.

Axel Laborieux1, Maxence Ernoult1,2,3, Benjamin Scellier3, Yoshua Bengio3,4, Julie Grollier2, Damien Querlioz1.   

Abstract

Equilibrium Propagation is a biologically inspired algorithm that trains convergent recurrent neural networks with a local learning rule. This approach makes it a strong candidate for enabling learning-capable neuromorphic systems, and it comes with strong theoretical guarantees. Equilibrium Propagation operates in two phases: the network is first allowed to evolve freely and is then "nudged" toward a target; the weights of the network are then updated based solely on the states of the neurons that they connect. The weight updates of Equilibrium Propagation have been shown mathematically to approach those provided by Backpropagation Through Time (BPTT), the mainstream approach to training recurrent neural networks, when nudging is performed with infinitely small strength. In practice, however, the standard implementation of Equilibrium Propagation does not scale to visual tasks harder than MNIST. In this work, we show that a bias in the gradient estimate of Equilibrium Propagation, inherent in the use of finite nudging, is responsible for this phenomenon, and that canceling it allows training deep convolutional neural networks. We show that this bias can be greatly reduced by using symmetric nudging (a positive nudging and a negative one). We also generalize Equilibrium Propagation to the case of cross-entropy loss (as opposed to squared error). As a result of these advances, we achieve a test error of 11.7% on CIFAR-10, which approaches that achieved by BPTT and is a major improvement over standard Equilibrium Propagation, which gives 86% test error. We also apply these techniques to train an architecture with unidirectional forward and backward connections, yielding a 13.2% test error. These results highlight Equilibrium Propagation as a compelling biologically plausible approach to computing error gradients in deep neuromorphic systems.
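The symmetric nudging described in the abstract can be illustrated on a toy energy-based model. The sketch below is not the paper's implementation: the scalar energy E(theta, s) = (s - theta)^2 / 2, the loss L(s) = (s - t)^2 / 2, and the `settle` relaxation routine are illustrative assumptions chosen so that the true gradient dL/dtheta = theta - t is known in closed form. It shows the key point: the two-sided estimate's bias shrinks as O(beta^2), versus O(beta) for the standard one-sided estimate.

```python
# Toy sketch (illustrative, not the paper's code) of one-sided vs. symmetric
# nudging in Equilibrium Propagation, on a scalar state s and parameter theta.
# Energy: E(theta, s) = (s - theta)^2 / 2;  loss: L(s) = (s - t)^2 / 2.
# True gradient of the loss at the free equilibrium (s = theta): theta - t.

def settle(theta, t, beta, steps=2000, lr=0.1):
    """Relax s to the minimum of the total energy E(theta, s) + beta * L(s)."""
    s = 0.0
    for _ in range(steps):
        d_total_ds = (s - theta) + beta * (s - t)  # gradient of total energy in s
        s -= lr * d_total_ds
    return s

def dE_dtheta(theta, s):
    """Partial derivative of E(theta, s) with respect to theta."""
    return theta - s

theta, t, beta = 1.0, 0.0, 0.1
s_free = settle(theta, t, 0.0)     # free phase (no nudging)
s_pos  = settle(theta, t, +beta)   # positively nudged phase
s_neg  = settle(theta, t, -beta)   # negatively nudged phase

# Standard (one-sided) estimate: bias of order beta.
one_sided = (dE_dtheta(theta, s_pos) - dE_dtheta(theta, s_free)) / beta
# Symmetric (two-sided) estimate: bias of order beta**2.
symmetric = (dE_dtheta(theta, s_pos) - dE_dtheta(theta, s_neg)) / (2 * beta)

true_grad = theta - t
print(true_grad, one_sided, symmetric)
```

With these numbers the one-sided estimate lands near 0.909 while the symmetric one lands near 1.010, both against a true gradient of 1.0; the symmetric residual error is roughly an order of magnitude smaller, consistent with the O(beta^2) vs. O(beta) scaling.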
Copyright © 2021 Laborieux, Ernoult, Scellier, Bengio, Grollier and Querlioz.


Keywords:  biologically plausible deep learning; deep convolutional neural network; energy based models; equilibrium propagation; learning algorithms; neuromorphic computing; on-chip learning

Year:  2021        PMID: 33679315      PMCID: PMC7930909          DOI: 10.3389/fnins.2021.633674

Source DB:  PubMed          Journal:  Front Neurosci        ISSN: 1662-453X            Impact factor:   4.677


Related articles: 3 in total

1.  Neurons learn by predicting future activity.

Authors:  Artur Luczak; Bruce L McNaughton; Yoshimasa Kubo
Journal:  Nat Mach Intell       Date:  2022-01-25

2.  Cell-type-specific neuromodulation guides synaptic credit assignment in a spiking neural network.

Authors:  Yuhan Helena Liu; Stephen Smith; Stefan Mihalas; Eric Shea-Brown; Uygar Sümbül
Journal:  Proc Natl Acad Sci U S A       Date:  2021-12-21       Impact factor: 11.205

3.  Combining backpropagation with Equilibrium Propagation to improve an Actor-Critic reinforcement learning framework.

Authors:  Yoshimasa Kubo; Eric Chalmers; Artur Luczak
Journal:  Front Comput Neurosci       Date:  2022-08-23       Impact factor: 3.387

