
Universality and individuality in neural dynamics across large populations of recurrent networks.

Niru Maheswaranathan, Alex H Williams, Matthew D Golub, Surya Ganguli, David Sussillo.

Abstract

Task-based modeling with recurrent neural networks (RNNs) has emerged as a popular way to infer the computational function of different brain regions. These models are quantitatively assessed by comparing the low-dimensional neural representations of the model with the brain, for example using canonical correlation analysis (CCA). However, the nature of the detailed neurobiological inferences one can draw from such efforts remains elusive. For example, to what extent does training neural networks to solve common tasks uniquely determine the network dynamics, independent of modeling architectural choices? Or alternatively, are the learned dynamics highly sensitive to different model choices? Knowing the answer to these questions has strong implications for whether and how we should use task-based RNN modeling to understand brain dynamics. To address these foundational questions, we study populations of thousands of networks, with commonly used RNN architectures, trained to solve neuroscientifically motivated tasks and characterize their nonlinear dynamics. We find the geometry of the RNN representations can be highly sensitive to different network architectures, yielding a cautionary tale for measures of similarity that rely on representational geometry, such as CCA. Moreover, we find that while the geometry of neural dynamics can vary greatly across architectures, the underlying computational scaffold (the topological structure of fixed points, transitions between them, limit cycles, and linearized dynamics) often appears universal across all architectures.
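As a rough illustration of the kind of geometric comparison the abstract refers to, the sketch below computes a mean canonical correlation between the hidden-state trajectories of two networks. This is a minimal SVD-based CCA, not the paper's exact pipeline; the function name and the QR-whitening shortcut are illustrative assumptions.

```python
import numpy as np

def cca_similarity(X, Y, n_components=3):
    """Mean of the top canonical correlations between two activity matrices.

    X, Y: arrays of shape (timesteps, units), hidden-state trajectories
    recorded from two networks on the same task inputs.
    """
    # Center each unit's activity over time.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Orthonormal bases for each network's activity subspace.
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # Singular values of Qx^T Qy are the canonical correlations.
    sigma = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return sigma[:n_components].mean()
```

Because CCA is invariant to invertible linear transformations, two networks whose trajectories differ only by a rotation of state space score a similarity of 1; this is precisely why CCA can report high similarity even when the raw geometry differs.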

Year: 2019    PMID: 32782422    PMCID: PMC7416639

Source DB: PubMed    Journal: Adv Neural Inf Process Syst    ISSN: 1049-5258


References: 31 in total

1.  Long short-term memory.

Authors:  S Hochreiter; J Schmidhuber
Journal:  Neural Comput       Date:  1997-11-15       Impact factor: 2.026

2.  A mathematical theory of semantic development in deep neural networks.

Authors:  Andrew M Saxe; James L McClelland; Surya Ganguli
Journal:  Proc Natl Acad Sci U S A       Date:  2019-05-17       Impact factor: 11.205

3.  [Review] Recurrent neural networks as versatile tools of neuroscience research.

Authors:  Omri Barak
Journal:  Curr Opin Neurobiol       Date:  2017-06-29       Impact factor: 6.627

4.  A diverse range of factors affect the nature of neural representations underlying short-term memory.

Authors:  A Emin Orhan; Wei Ji Ma
Journal:  Nat Neurosci       Date:  2019-01-24       Impact factor: 24.884

5.  [Review] Neural circuits as computational dynamical systems.

Authors:  David Sussillo
Journal:  Curr Opin Neurobiol       Date:  2014-02-05       Impact factor: 6.627

6.  Neural networks and physical systems with emergent collective computational abilities.

Authors:  J J Hopfield
Journal:  Proc Natl Acad Sci U S A       Date:  1982-04       Impact factor: 11.205

7.  A Task-Optimized Neural Network Replicates Human Auditory Behavior, Predicts Brain Responses, and Reveals a Cortical Processing Hierarchy.

Authors:  Alexander J E Kell; Daniel L K Yamins; Erica N Shook; Sam V Norman-Haignere; Josh H McDermott
Journal:  Neuron       Date:  2018-04-19       Impact factor: 17.173

8.  Deep supervised, but not unsupervised, models may explain IT cortical representation.

Authors:  Seyed-Mahdi Khaligh-Razavi; Nikolaus Kriegeskorte
Journal:  PLoS Comput Biol       Date:  2014-11-06       Impact factor: 4.475

9.  Flexible timing by temporal scaling of cortical responses.

Authors:  Jing Wang; Devika Narain; Eghbal A Hosseini; Mehrdad Jazayeri
Journal:  Nat Neurosci       Date:  2017-12-04       Impact factor: 24.884

10.  Gated Recurrent Units Viewed Through the Lens of Continuous Time Dynamical Systems.

Authors:  Ian D Jordan; Piotr Aleksander Sokół; Il Memming Park
Journal:  Front Comput Neurosci       Date:  2021-07-22       Impact factor: 2.380

Cited by: 11 in total

1.  A goal-driven modular neural network predicts parietofrontal neural dynamics during grasping.

Authors:  Jonathan A Michaels; Stefan Schaffelhofer; Andres Agudelo-Toro; Hansjörg Scherberger
Journal:  Proc Natl Acad Sci U S A       Date:  2020-11-30       Impact factor: 11.205

2.  Motor cortex activity across movement speeds is predicted by network-level strategies for generating muscle activity.

Authors:  Shreya Saxena; Abigail A Russo; John Cunningham; Mark M Churchland
Journal:  Elife       Date:  2022-05-27       Impact factor: 8.713

3.  The role of population structure in computations through neural dynamics.

Authors:  Alexis Dubreuil; Adrian Valente; Manuel Beiran; Francesca Mastrogiuseppe; Srdjan Ostojic
Journal:  Nat Neurosci       Date:  2022-06-06       Impact factor: 28.771

4.  Dynamics and Information Import in Recurrent Neural Networks.

Authors:  Claus Metzner; Patrick Krauss
Journal:  Front Comput Neurosci       Date:  2022-04-27       Impact factor: 3.387

5.  From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction.

Authors:  Hidenori Tanaka; Aran Nayebi; Niru Maheswaranathan; Lane McIntosh; Stephen A Baccus; Surya Ganguli
Journal:  Adv Neural Inf Process Syst       Date:  2019-12

6.  Neural Trajectories in the Supplementary Motor Area and Motor Cortex Exhibit Distinct Geometries, Compatible with Different Classes of Computation.

Authors:  Abigail A Russo; Ramin Khajeh; Sean R Bittner; Sean M Perkins; John P Cunningham; L F Abbott; Mark M Churchland
Journal:  Neuron       Date:  2020-06-08       Impact factor: 18.688

7.  Individual differences among deep neural network models.

Authors:  Johannes Mehrer; Courtney J Spoerer; Nikolaus Kriegeskorte; Tim C Kietzmann
Journal:  Nat Commun       Date:  2020-11-12       Impact factor: 14.919

8.  Different eigenvalue distributions encode the same temporal tasks in recurrent neural networks.

Authors:  Cecilia Jarne
Journal:  Cogn Neurodyn       Date:  2022-04-20       Impact factor: 3.473

9.  Recurrent neural networks with explicit representation of dynamic latent variables can mimic behavioral patterns in a physical inference task.

Authors:  Rishi Rajalingham; Aída Piccato; Mehrdad Jazayeri
Journal:  Nat Commun       Date:  2022-10-04       Impact factor: 17.694

10.  Slow manifolds within network dynamics encode working memory efficiently and robustly.

Authors:  Elham Ghazizadeh; ShiNung Ching
Journal:  PLoS Comput Biol       Date:  2021-09-15       Impact factor: 4.475

