
Engineering a Less Artificial Intelligence.

Fabian H Sinz, Xaq Pitkow, Jacob Reimer, Matthias Bethge, Andreas S Tolias.

Abstract

Despite enormous progress in machine learning, artificial neural networks still lag behind brains in their ability to generalize to new situations. Given identical training data, differences in generalization are caused by many defining features of a learning algorithm, such as network architecture and learning rule. Their joint effect, called "inductive bias," determines how well any learning algorithm (or brain) generalizes: robust generalization needs good inductive biases. Artificial networks use rather nonspecific biases and often latch onto patterns that are only informative about the statistics of the training data but may not generalize to different scenarios. Brains, on the other hand, generalize across comparatively drastic changes in the sensory input all the time. We highlight some shortcomings of state-of-the-art learning algorithms compared to biological brains and discuss several ideas about how neuroscience can guide the quest for better inductive biases by providing useful constraints on representations and network architecture.
Copyright © 2019 Elsevier Inc. All rights reserved.

Keywords:  artificial intelligence; generalization; inductive bias; machine learning; neuroscience; robustness; sensory systems


Year:  2019        PMID: 31557461     DOI: 10.1016/j.neuron.2019.08.034

Source DB:  PubMed          Journal:  Neuron        ISSN: 0896-6273            Impact factor:   17.173


Related articles: 23 in total

1.  Modeling nucleus accumbens: A Computational Model from Single Cell to Circuit Level.

Authors:  Rahmi Elibol; Neslihan Serap Şengör
Journal:  J Comput Neurosci       Date:  2020-11-09       Impact factor: 1.621

Review 2.  If deep learning is the answer, what is the question?

Authors:  Andrew Saxe; Stephanie Nelli; Christopher Summerfield
Journal:  Nat Rev Neurosci       Date:  2020-11-16       Impact factor: 34.870

3.  Rational thoughts in neural codes.

Authors:  Zhengwei Wu; Minhae Kwon; Saurabh Daptardar; Paul Schrater; Xaq Pitkow
Journal:  Proc Natl Acad Sci U S A       Date:  2020-11-24       Impact factor: 11.205

Review 4.  Illuminating dendritic function with computational models.

Authors:  Panayiota Poirazi; Athanasia Papoutsi
Journal:  Nat Rev Neurosci       Date:  2020-05-11       Impact factor: 34.870

Review 5.  Discovering the Computational Relevance of Brain Network Organization.

Authors:  Takuya Ito; Luke Hearne; Ravi Mill; Carrisa Cocuzza; Michael W Cole
Journal:  Trends Cogn Sci       Date:  2019-11-11       Impact factor: 20.229

Review 6.  How learning unfolds in the brain: toward an optimization view.

Authors:  Jay A Hennig; Emily R Oby; Darby M Losey; Aaron P Batista; Byron M Yu; Steven M Chase
Journal:  Neuron       Date:  2021-10-13       Impact factor: 17.173

7.  Working Memory: From Neural Activity to the Sentient Mind.

Authors:  Russell J Jaffe; Christos Constantinidis
Journal:  Compr Physiol       Date:  2021-09-23       Impact factor: 8.915

8.  A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence.

Authors:  Emily J Allen; Ghislain St-Yves; Yihan Wu; Jesse L Breedlove; Jacob S Prince; Logan T Dowdle; Matthias Nau; Brad Caron; Franco Pestilli; Ian Charest; J Benjamin Hutchinson; Thomas Naselaris; Kendrick Kay
Journal:  Nat Neurosci       Date:  2021-12-16       Impact factor: 28.771

Review 9.  Crossing the Cleft: Communication Challenges Between Neuroscience and Artificial Intelligence.

Authors:  Frances S Chance; James B Aimone; Srideep S Musuvathy; Michael R Smith; Craig M Vineyard; Felix Wang
Journal:  Front Comput Neurosci       Date:  2020-05-06       Impact factor: 2.380

10.  Cortical response to naturalistic stimuli is largely predictable with deep neural networks.

Authors:  Meenakshi Khosla; Gia H Ngo; Keith Jamison; Amy Kuceyeski; Mert R Sabuncu
Journal:  Sci Adv       Date:  2021-05-28       Impact factor: 14.136

