
PsychRNN: An Accessible and Flexible Python Package for Training Recurrent Neural Network Models on Cognitive Tasks.

Daniel B Ehrlich; Jasmine T Stone; David Brandfonbrener; Alexander Atanasov; John D Murray

Abstract

Task-trained artificial recurrent neural networks (RNNs) provide a computational modeling framework of increasing interest and application in computational, systems, and cognitive neuroscience. RNNs can be trained, using deep-learning methods, to perform cognitive tasks used in animal and human experiments and can be studied to investigate potential neural representations and circuit mechanisms underlying cognitive computations and behavior. Widespread application of these approaches within neuroscience has been limited by technical barriers in use of deep-learning software packages to train network models. Here, we introduce PsychRNN, an accessible, flexible, and extensible Python package for training RNNs on cognitive tasks. Our package is designed for accessibility, for researchers to define tasks and train RNN models using only Python and NumPy, without requiring knowledge of deep-learning software. The training backend is based on TensorFlow and is readily extensible for researchers with TensorFlow knowledge to develop projects with additional customization. PsychRNN implements a number of specialized features to support applications in systems and cognitive neuroscience. Users can impose neurobiologically relevant constraints on synaptic connectivity patterns. Furthermore, specification of cognitive tasks has a modular structure, which facilitates parametric variation of task demands to examine their impact on model solutions. PsychRNN also enables task shaping during training, or curriculum learning, in which tasks are adjusted in closed-loop based on performance. Shaping is ubiquitous in training of animals in cognitive tasks, and PsychRNN allows investigation of how shaping trajectories impact learning and model solutions. Overall, the PsychRNN framework facilitates application of trained RNNs in neuroscience research.
Copyright © 2021 Ehrlich et al.
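The abstract notes that tasks can be defined using only Python and NumPy, with a modular structure that supports parametric variation of task demands. As an illustrative sketch only (this is not the actual PsychRNN API; the function name and signature are hypothetical), a trial generator for a two-alternative perceptual decision task in that spirit might look like:

```python
import numpy as np

def generate_trial(coherence=0.2, direction=1, T=100, noise=0.1, seed=0):
    """Hypothetical two-alternative perceptual decision trial.

    Returns (x, y, mask): noisy inputs of shape (T, 2), target
    outputs of shape (T, 2), and a loss mask of shape (T,) that
    scores only the response epoch (the last quarter of the trial).
    """
    rng = np.random.default_rng(seed)
    # Noisy input on two channels; evidence favors the correct choice.
    x = noise * rng.standard_normal((T, 2))
    x[:, direction] += coherence
    # Target: report the correct choice during the response epoch.
    y = np.zeros((T, 2))
    y[3 * T // 4:, direction] = 1.0
    # Mask: only the response epoch contributes to the training loss.
    mask = np.zeros(T)
    mask[3 * T // 4:] = 1.0
    return x, y, mask

x, y, mask = generate_trial(coherence=0.4, direction=0)
```

Because task demands enter as plain parameters (here, `coherence` and `noise`), parametric variation and closed-loop shaping of the kind the abstract describes amount to adjusting these values across training batches.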


Keywords:  cognitive task; computational model; deep learning; recurrent neural network; training

Year:  2021        PMID: 33328247      PMCID: PMC7814477          DOI: 10.1523/ENEURO.0427-20.2020

Source DB:  PubMed          Journal:  eNeuro        ISSN: 2373-2822


References: 26 in total (first 10 shown)

1.  Experience-dependent representation of visual categories in parietal cortex.

Authors:  David J Freedman; John A Assad
Journal:  Nature       Date:  2006-08-27       Impact factor: 49.962

2.  Long short-term memory.

Authors:  S Hochreiter; J Schmidhuber
Journal:  Neural Comput       Date:  1997-11-15       Impact factor: 2.026

3.  Dynamic Control of Response Criterion in Premotor Cortex during Perceptual Detection under Temporal Uncertainty.

Authors:  Federico Carnevale; Victor de Lafuente; Ranulfo Romo; Omri Barak; Néstor Parga
Journal:  Neuron       Date:  2015-05-07       Impact factor: 17.173

4.  A diverse range of factors affect the nature of neural representations underlying short-term memory.

Authors:  A Emin Orhan; Wei Ji Ma
Journal:  Nat Neurosci       Date:  2019-01-24       Impact factor: 24.884

5.  (Review) Neural circuits as computational dynamical systems.

Authors:  David Sussillo
Journal:  Curr Opin Neurobiol       Date:  2014-02-05       Impact factor: 6.627

6.  (Review) Using goal-driven deep learning models to understand sensory cortex.

Authors:  Daniel L K Yamins; James J DiCarlo
Journal:  Nat Neurosci       Date:  2016-03       Impact factor: 24.884

7.  Standardized automated training of rhesus monkeys for neuroscience research in their housing environment.

Authors:  M Berger; A Calapai; V Stephan; M Niessing; L Burchardt; A Gail; S Treue
Journal:  J Neurophysiol       Date:  2017-11-15       Impact factor: 2.714

8.  A neural network that finds a naturalistic solution for the production of muscle activity.

Authors:  David Sussillo; Mark M Churchland; Matthew T Kaufman; Krishna V Shenoy
Journal:  Nat Neurosci       Date:  2015-06-15       Impact factor: 24.884

9.  Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.

Authors:  Nikolaus Kriegeskorte
Journal:  Annu Rev Vis Sci       Date:  2015-11-24       Impact factor: 6.422

10.  Thalamic regulation of switching between cortical representations enables cognitive flexibility.

Authors:  Rajeev V Rikhye; Aditya Gilra; Michael M Halassa
Journal:  Nat Neurosci       Date:  2018-11-19       Impact factor: 24.884

Cited by: 3 in total

1.  Dynamic task-belief is an integral part of decision-making.

Authors:  Cheng Xue; Lily E Kramer; Marlene R Cohen
Journal:  Neuron       Date:  2022-06-13       Impact factor: 18.688

2.  Geometry of neural computation unifies working memory and planning.

Authors:  Daniel B Ehrlich; John D Murray
Journal:  Proc Natl Acad Sci U S A       Date:  2022-09-06       Impact factor: 12.779

3.  Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning.

Authors:  Haroon Anwar; Simon Caby; Salvador Dura-Bernal; David D'Onofrio; Daniel Hasegan; Matt Deible; Sara Grunblatt; George L Chadderdon; Cliff C Kerr; Peter Lakatos; William W Lytton; Hananel Hazan; Samuel A Neymotin
Journal:  PLoS One       Date:  2022-05-11       Impact factor: 3.752

