| Literature DB >> 32477089 |
Frances S Chance, James B Aimone, Srideep S Musuvathy, Michael R Smith, Craig M Vineyard, Felix Wang.
Abstract
Historically, neuroscience principles have heavily influenced artificial intelligence (AI); consider, for example, the influence of the perceptron model, essentially a simple model of a biological neuron, on artificial neural networks. More recently, notable AI advances, such as the growing popularity of reinforcement learning, often appear more aligned with cognitive neuroscience or psychology, focusing on function at a relatively abstract level. At the same time, neuroscience stands poised to enter a new era of large-scale, high-resolution data and appears more focused on underlying neural mechanisms or architectures that can, at times, seem rather removed from functional descriptions. While this might seem to foretell a new generation of AI approaches arising from a deeper exploration of neuroscience specifically for AI, the most direct path for achieving this is unclear. Here we discuss cultural differences between the two fields, including divergent priorities that should be considered when leveraging modern-day neuroscience for AI. For example, the two fields feed two very different applications that at times require potentially conflicting perspectives. We highlight small but significant cultural shifts that we feel would greatly facilitate increased synergy between the two fields.
Keywords: artificial intelligence; artificial neural network; deep learning; neural-inspired algorithms; neuromorphic
Year: 2020 PMID: 32477089 PMCID: PMC7232604 DOI: 10.3389/fncom.2020.00039
Source DB: PubMed Journal: Front Comput Neurosci ISSN: 1662-5188 Impact factor: 2.380
Figure 1. Cultural differences between AI and neuroscience. Example studies from visual processing in AI (blue) and neuroscience (red) are projected onto three different “axes” of impact: answering the question of “what is it?” (form or hardware), answering the question of “how does it work?” (mechanism or representation), and answering the question of “what does it do?” (function or theory). Neuroscience results tend to be communicated as answers to the “what is it?” or “how does it work?” questions. As an example, Hubel and Wiesel's work (red star) characterizing simple and complex cells feeds continuing efforts along the form/hardware axis (horizontal solid red arrow) to further characterize cell types in visual cortex. At the same time, Hubel and Wiesel's hierarchical model of visual processing has had significant impact along the mechanism/representation axis (vertical solid red arrow). Neuroscience also experiences a strong application pull along the “what is it?” axis, for example to identify therapeutic targets of circuit dysfunction (dashed red arrow). AI research tends to focus on “what does it do?” and “how does it work?” Here, the development of Fukushima's neocognitron (blue star) into convolutional networks is illustrated as impact along the mechanism/representation axis (vertical solid blue arrow), while the application of convolutional networks to image classification is impact along the function/theory axis (solid blue arrow). The dominant application pull on AI is to produce “human-cognition-like” computations (dashed blue arrow).