The unreasonable effectiveness of deep learning in artificial intelligence.

Terrence J Sejnowski

Abstract

Deep learning networks have been trained to recognize speech, caption photographs, and translate text between languages at high levels of performance. Although applications of deep learning networks to real-world problems have become ubiquitous, our understanding of why they are so effective is lacking. According to sample complexity in statistics and nonconvex optimization theory, these empirical results should not be possible. However, paradoxes in the training and effectiveness of deep learning networks are being investigated, and insights are being found in the geometry of high-dimensional spaces. A mathematical theory of deep learning would illuminate how these networks function, allow us to assess the strengths and weaknesses of different network architectures, and lead to major improvements. Deep learning has provided natural ways for humans to communicate with digital devices and is foundational for building artificial general intelligence. Deep learning was inspired by the architecture of the cerebral cortex, and insights into autonomy and general intelligence may be found in other brain regions that are essential for planning and survival, but major breakthroughs will be needed to achieve these goals.

Keywords:  artificial intelligence; deep learning; neural networks

Year:  2020        PMID: 31992643      PMCID: PMC7720171          DOI: 10.1073/pnas.1907373117

Source DB:  PubMed          Journal:  Proc Natl Acad Sci U S A        ISSN: 0027-8424            Impact factor:   11.205


References:  14 in total

Review 1.  The book of Hebb.

Authors:  T J Sejnowski
Journal:  Neuron       Date:  1999-12       Impact factor: 17.173

2.  A fast learning algorithm for deep belief nets.

Authors:  Geoffrey E Hinton; Simon Osindero; Yee-Whye Teh
Journal:  Neural Comput       Date:  2006-07       Impact factor: 2.026

3.  A logical calculus of the ideas immanent in nervous activity. 1943.

Authors:  W S McCulloch; W Pitts
Journal:  Bull Math Biol       Date:  1990       Impact factor: 1.758

4.  Scaling Principles of Distributed Circuits.

Authors:  Shyam Srinivasan; Charles F Stevens
Journal:  Curr Biol       Date:  2019-07-18       Impact factor: 10.834

5.  A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play.

Authors:  David Silver; Thomas Hubert; Julian Schrittwieser; Ioannis Antonoglou; Matthew Lai; Arthur Guez; Marc Lanctot; Laurent Sifre; Dharshan Kumaran; Thore Graepel; Timothy Lillicrap; Karen Simonyan; Demis Hassabis
Journal:  Science       Date:  2018-12-07       Impact factor: 47.728

6.  Isolated cortical computations during delta waves support memory consolidation.

Authors:  Ralitsa Todorova; Michaël Zugaro
Journal:  Science       Date:  2019-10-18       Impact factor: 47.728

7.  A universal scaling law between gray matter and white matter of cerebral cortex.

Authors:  K Zhang; T J Sejnowski
Journal:  Proc Natl Acad Sci U S A       Date:  2000-05-09       Impact factor: 11.205

Review 8.  Neuromodulation of neuronal circuits: back to the future.

Authors:  Eve Marder
Journal:  Neuron       Date:  2012-10-04       Impact factor: 17.173

Review 9.  Algorithms in nature: the convergence of systems biology and computational thinking.

Authors:  Saket Navlakha; Ziv Bar-Joseph
Journal:  Mol Syst Biol       Date:  2011-11-08       Impact factor: 11.429

10.  Rotating waves during human sleep spindles organize global patterns of activity that repeat precisely through the night.

Authors:  Lyle Muller; Giovanni Piantoni; Dominik Koller; Sydney S Cash; Eric Halgren; Terrence J Sejnowski
Journal:  Elife       Date:  2016-11-15       Impact factor: 8.140

Cited by:  24 in total

Review 1.  Spine dynamics in the brain, mental disorders and artificial neural networks.

Authors:  Haruo Kasai; Noam E Ziv; Hitoshi Okazaki; Sho Yagishita; Taro Toyoizumi
Journal:  Nat Rev Neurosci       Date:  2021-05-28       Impact factor: 34.870

2.  Diversity-enabled sweet spots in layered architectures and speed-accuracy trade-offs in sensorimotor control.

Authors:  Yorie Nakahira; Quanying Liu; Terrence J Sejnowski; John C Doyle
Journal:  Proc Natl Acad Sci U S A       Date:  2021-06-01       Impact factor: 11.205

3.  The science of deep learning.

Authors:  Richard Baraniuk; David Donoho; Matan Gavish
Journal:  Proc Natl Acad Sci U S A       Date:  2020-11-23       Impact factor: 11.205

4.  Replication across space and time must be weak in the social and environmental sciences.

Authors:  Michael F Goodchild; Wenwen Li
Journal:  Proc Natl Acad Sci U S A       Date:  2021-08-31       Impact factor: 11.205

Review 5.  On the road to explainable AI in drug-drug interactions prediction: A systematic review.

Authors:  Thanh Hoa Vo; Ngan Thi Kim Nguyen; Quang Hien Kha; Nguyen Quoc Khanh Le
Journal:  Comput Struct Biotechnol J       Date:  2022-04-19       Impact factor: 6.155

6.  Automatic Scan Range Delimitation in Chest CT Using Deep Learning.

Authors:  Aydin Demircioğlu; Moon-Sung Kim; Magdalena Charis Stein; Nika Guberina; Lale Umutlu; Kai Nassenstein
Journal:  Radiol Artif Intell       Date:  2021-02-10

7.  Identifying Habitat Elements from Bird Images Using Deep Convolutional Neural Networks.

Authors:  Zhaojun Wang; Jiangning Wang; Congtian Lin; Yan Han; Zhaosheng Wang; Liqiang Ji
Journal:  Animals (Basel)       Date:  2021-04-27       Impact factor: 2.752

Review 8.  Emerging machine learning approaches to phenotyping cellular motility and morphodynamics.

Authors:  Hee June Choi; Chuangqi Wang; Xiang Pan; Junbong Jang; Mengzhi Cao; Joseph A Brazzo; Yongho Bae; Kwonmoo Lee
Journal:  Phys Biol       Date:  2021-06-17       Impact factor: 2.959

9.  Physics-informed attention-based neural network for hyperbolic partial differential equations: application to the Buckley-Leverett problem.

Authors:  Ruben Rodriguez-Torrado; Pablo Ruiz; Luis Cueto-Felgueroso; Michael Cerny Green; Tyler Friesen; Sebastien Matringe; Julian Togelius
Journal:  Sci Rep       Date:  2022-05-09       Impact factor: 4.996

10.  BENDR: Using Transformers and a Contrastive Self-Supervised Learning Task to Learn From Massive Amounts of EEG Data.

Authors:  Demetres Kostas; Stéphane Aroca-Ouellette; Frank Rudzicz
Journal:  Front Hum Neurosci       Date:  2021-06-23       Impact factor: 3.169

