Unsupervised Learning and Clustered Connectivity Enhance Reinforcement Learning in Spiking Neural Networks.

Philipp Weidel, Renato Duarte, Abigail Morrison.

Abstract

Reinforcement learning is a paradigm that can account for how organisms learn to adapt their behavior in complex environments with sparse rewards. To partition an environment into discrete states, implementations in spiking neuronal networks typically rely on input architectures involving place cells or receptive fields specified ad hoc by the researcher. This is problematic as a model for how an organism can learn appropriate behavioral sequences in unknown environments, as it fails to account for the unsupervised and self-organized nature of the required representations. Additionally, this approach presupposes knowledge on the part of the researcher on how the environment should be partitioned and represented and scales poorly with the size or complexity of the environment. To address these issues and gain insights into how the brain generates its own task-relevant mappings, we propose a learning architecture that combines unsupervised learning on the input projections with biologically motivated clustered connectivity within the representation layer. This combination allows input features to be mapped to clusters; thus the network self-organizes to produce clearly distinguishable activity patterns that can serve as the basis for reinforcement learning on the output projections. On the basis of the MNIST and Mountain Car tasks, we show that our proposed model performs better than either a comparable unclustered network or a clustered network with static input projections. We conclude that the combination of unsupervised learning and clustered connectivity provides a generic representational substrate suitable for further computation.
Copyright © 2021 Weidel, Duarte and Morrison.
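The architecture described in the abstract can be caricatured in a few lines of NumPy. The sketch below is **not** the paper's spiking model: it replaces spiking dynamics with rate units, the clustered recurrent connectivity with a hard winner-take-all between two clusters, the unsupervised plasticity on the input projections with normalized Hebbian learning, and the reinforcement-learned output projections with a REINFORCE-style softmax readout. All function names, parameters, and the two-feature toy task are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_CLUSTERS, CLUSTER_SIZE = 10, 2, 5


def sample(feature):
    """Noisy input pattern: feature 0 drives the first half of the
    input vector, feature 1 the second half (toy stand-in for MNIST)."""
    x = np.zeros(N_IN)
    x[:5] = 1.0 if feature == 0 else 0.0
    x[5:] = 1.0 if feature == 1 else 0.0
    return x + 0.1 * rng.standard_normal(N_IN)


def cluster_activity(x, W):
    """Hard winner-take-all between clusters: a crude stand-in for the
    competition induced by clustered recurrent connectivity."""
    h = W @ x
    winner = int(np.argmax(h.reshape(N_CLUSTERS, CLUSTER_SIZE).mean(axis=1)))
    a = np.zeros_like(h)
    sl = slice(winner * CLUSTER_SIZE, (winner + 1) * CLUSTER_SIZE)
    a[sl] = np.maximum(h[sl], 0.0)
    return a, winner


# Input projections, one row per representation unit; each cluster's rows
# are seeded from an early observation (a common competitive-learning trick).
W_in = np.vstack([np.tile(sample(f), (CLUSTER_SIZE, 1)) for f in (0, 1)])
W_in += 0.01 * rng.standard_normal(W_in.shape)

# Unsupervised stage: normalized Hebbian learning on the input projections.
# Only the winning cluster is active, so only its weights move toward
# the current input; normalization keeps the weights bounded.
for _ in range(500):
    x = sample(rng.integers(2))
    a, _ = cluster_activity(x, W_in)
    W_in += 0.05 * np.outer(a, x)
    W_in /= np.linalg.norm(W_in, axis=1, keepdims=True)

# Each input feature should now reliably activate its own cluster,
# yielding clearly distinguishable activity patterns.
winners = {f: cluster_activity(sample(f), W_in)[1] for f in (0, 1)}

# RL stage on the output projections: a reward-modulated (REINFORCE-style)
# update on a softmax readout; the toy "task" is to report the active feature.
W_out = np.zeros((2, N_CLUSTERS * CLUSTER_SIZE))
for _ in range(300):
    f = rng.integers(2)
    a, _ = cluster_activity(sample(f), W_in)
    z = W_out @ a
    p = np.exp(z - z.max())          # max-subtraction for numerical safety
    p /= p.sum()
    action = rng.choice(2, p=p)
    reward = 1.0 if action == f else -1.0
    W_out += 0.1 * reward * np.outer(np.eye(2)[action] - p, a)
```

Because the unsupervised stage confines each feature's activity to a separate cluster, the reward-modulated readout only has to solve a trivially separable problem; this is the division of labor the abstract argues for.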

Keywords:  clustered connectivity; neural plasticity; reinforcement learning; spiking neural network; unsupervised learning

Year:  2021        PMID: 33746728      PMCID: PMC7970044          DOI: 10.3389/fncom.2021.543872

Source DB:  PubMed          Journal:  Front Comput Neurosci        ISSN: 1662-5188            Impact factor:   2.380


  2 in total

1.  Exploring Parameter and Hyper-Parameter Spaces of Neuroscience Models on High Performance Computers With Learning to Learn.

Authors:  Alper Yegenoglu; Anand Subramoney; Thorsten Hater; Cristian Jimenez-Romero; Wouter Klijn; Aarón Pérez Martín; Michiel van der Vlag; Michael Herty; Abigail Morrison; Sandra Diaz-Pier
Journal:  Front Comput Neurosci       Date:  2022-05-27       Impact factor: 3.387

2.  Face detection in untrained deep neural networks.

Authors:  Seungdae Baek; Min Song; Jaeson Jang; Gwangsu Kim; Se-Bum Paik
Journal:  Nat Commun       Date:  2021-12-16       Impact factor: 14.919

