The unreasonable effectiveness of small neural ensembles in high-dimensional brain.

Alexander N Gorban, Valeri A Makarov, Ivan Y Tyukin

Abstract

Complexity is an indisputable, well-known, and broadly accepted feature of the brain. Despite this apparently obvious and widespread consensus on the brain's complexity, sprouts of the single-neuron revolution emerged in neuroscience in the 1970s. They brought many unexpected discoveries, including grandmother or concept cells and sparse coding of information in the brain. In machine learning, the famous curse of dimensionality long seemed an unsolvable problem. Nevertheless, the idea of the blessing of dimensionality has gradually become more popular. Ensembles of non-interacting or weakly interacting simple units prove to be an effective tool for solving essentially multidimensional and apparently incomprehensible problems. This approach is especially useful for one-shot (non-iterative) correction of errors in large legacy artificial intelligence systems, when complete re-training is impossible or too expensive. These simplicity revolutions in the era of complexity have deep fundamental reasons grounded in the geometry of multidimensional data spaces. To explore and understand these reasons we revisit the background ideas of statistical physics, which were developed over the course of the 20th century into the concentration of measure theory. The Gibbs equivalence of ensembles, with further generalizations, shows that data in high-dimensional spaces are concentrated near shells of smaller dimension. New stochastic separation theorems reveal the fine structure of the data clouds. We review and analyse biological, physical, and mathematical problems at the core of the fundamental question: how can a high-dimensional brain organise reliable and fast learning in a high-dimensional world of data using simple tools? To meet this challenge, we outline and set up a framework based on the statistical physics of data.
Two critical applications are reviewed to exemplify the approach: one-shot correction of errors in intellectual systems, and the emergence of static and associative memories in ensembles of single neurons. Error correctors should be simple; they should not damage the existing skills of the system; and they should allow fast non-iterative learning and correction of new mistakes without destroying previous fixes. All these demands can be satisfied by new tools based on the concentration of measure phenomena and stochastic separation theory. We show how a simple enough functional neuronal model is capable of explaining: i) the extreme selectivity of single neurons to the information content of high-dimensional data, ii) simultaneous separation of several uncorrelated informational items from a large set of stimuli, and iii) dynamic learning of new items by associating them with already "known" ones. These results constitute a basis for the organisation of complex memories in ensembles of single neurons.
Copyright © 2018 The Author(s). Published by Elsevier B.V. All rights reserved.
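A minimal illustrative sketch (not taken from the paper) of the stochastic separation phenomenon the abstract refers to: in high dimension, a randomly chosen point can be separated from a large random data cloud by a simple linear functional, which is the basis of one-shot error correctors. The dimension `d`, sample size `n`, and the uniform-cube distribution below are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 200, 1000  # ambient dimension and size of the "legacy" data cloud

# Sample a data cloud and one "error" point x, i.i.d. uniform on [-1, 1]^d.
cloud = rng.uniform(-1.0, 1.0, size=(n, d))
x = rng.uniform(-1.0, 1.0, size=d)

# One-shot linear separator: f(y) = <y, x>. For large d, <x, x> concentrates
# far above <y, x> for random y, so the threshold <x, x>/2 cuts x off from
# (almost) all cloud points without any iterative training.
threshold = 0.5 * np.dot(x, x)
scores = cloud @ x
separated_fraction = float(np.mean(scores < threshold))
print(f"fraction of cloud separated from x: {separated_fraction:.3f}")
```

With these settings the scores of the cloud points have mean 0 and standard deviation of order sqrt(d)/3, while the threshold grows like d/6, so for d = 200 essentially the whole cloud falls below the threshold; in low dimension (say d = 2) the same construction fails for a large fraction of points.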

Keywords:  Big data; Blessing of dimensionality; Error correction; Linear discriminant; Measure concentration; Non-iterative learning

Year:  2018        PMID: 30366739     DOI: 10.1016/j.plrev.2018.09.005

Source DB:  PubMed          Journal:  Phys Life Rev        ISSN: 1571-0645            Impact factor:   11.025


  8 in total

1.  Fractional Norms and Quasinorms Do Not Help to Overcome the Curse of Dimensionality.

Authors:  Evgeny M Mirkes; Jeza Allohibi; Alexander Gorban
Journal:  Entropy (Basel)       Date:  2020-09-30       Impact factor: 2.524

2. (Review) Toward Reflective Spiking Neural Networks Exploiting Memristive Devices.

Authors:  Valeri A Makarov; Sergey A Lobov; Sergey Shchanikov; Alexey Mikhaylov; Viktor B Kazantsev
Journal:  Front Comput Neurosci       Date:  2022-06-16       Impact factor: 3.387

3.  Bio-Inspired Autonomous Learning Algorithm With Application to Mobile Robot Obstacle Avoidance.

Authors:  Junxiu Liu; Yifan Hua; Rixing Yang; Yuling Luo; Hao Lu; Yanhu Wang; Su Yang; Xuemei Ding
Journal:  Front Neurosci       Date:  2022-06-30       Impact factor: 5.152

4. (Review) High-Dimensional Brain in a High-Dimensional World: Blessing of Dimensionality.

Authors:  Alexander N Gorban; Valery A Makarov; Ivan Y Tyukin
Journal:  Entropy (Basel)       Date:  2020-01-09       Impact factor: 2.524

5. (Review) Limit Theorems as Blessing of Dimensionality: Neural-Oriented Overview.

Authors:  Vladik Kreinovich; Olga Kosheleva
Journal:  Entropy (Basel)       Date:  2021-04-22       Impact factor: 2.524

6.  Situational Understanding in the Human and the Machine.

Authors:  Yan Yufik; Raj Malhotra
Journal:  Front Syst Neurosci       Date:  2021-12-23

7.  Universal principles justify the existence of concept cells.

Authors:  Carlos Calvo Tapia; Ivan Tyukin; Valeri A Makarov
Journal:  Sci Rep       Date:  2020-05-12       Impact factor: 4.379

8.  Competitive Learning in a Spiking Neural Network: Towards an Intelligent Pattern Classifier.

Authors:  Sergey A Lobov; Andrey V Chernyshov; Nadia P Krilova; Maxim O Shamshin; Victor B Kazantsev
Journal:  Sensors (Basel)       Date:  2020-01-16       Impact factor: 3.576

