On explaining the surprising success of reservoir computing forecaster of chaos? The universal machine learning dynamical system with contrast to VAR and DMD.

Erik Bollt

Abstract

Machine learning has become a widely popular and successful paradigm, especially in data-driven science and engineering. A major application problem is data-driven forecasting of future states from a complex dynamical system. Artificial neural networks have evolved as a clear leader among many machine learning approaches, and recurrent neural networks are considered to be particularly well suited for forecasting dynamical systems. In this setting, the echo-state networks or reservoir computers (RCs) have emerged for their simplicity and computational complexity advantages. Instead of a fully trained network, an RC trains only readout weights by a simple, efficient least squares method. What is perhaps quite surprising is that nonetheless, an RC succeeds in making high quality forecasts, competitively with more intensively trained methods, even if not the leader. There remains an unanswered question as to why and how an RC works at all despite randomly selected weights. To this end, this work analyzes a further simplified RC, where the internal activation function is an identity function. Our simplification is not presented for the sake of tuning or improving an RC, but rather for the sake of analysis of what we take to be the surprise being not that it does not work better, but that such random methods work at all. We explicitly connect the RC with linear activation and linear readout to well developed time-series literature on vector autoregressive (VAR) averages that includes theorems on representability through the Wold theorem, which already performs reasonably for short-term forecasts. In the case of a linear activation and now popular quadratic readout RC, we explicitly connect to a nonlinear VAR, which performs quite well. Furthermore, we associate this paradigm to the now widely popular dynamic mode decomposition; thus, these three are in a sense different faces of the same thing. 
We illustrate our observations in terms of popular benchmark examples including Mackey-Glass differential delay equations and the Lorenz63 system.
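The abstract's central point — that an RC with identity activation and a least-squares-trained readout reduces to a linear VAR forecaster — can be sketched in a few lines. The following is a minimal illustration, not the paper's code; the toy signal, reservoir size, spectral radius, and ridge parameter are all hypothetical choices for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D signal standing in for chaotic benchmark data (hypothetical).
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t) * np.cos(0.3 * t)

N = 100                                      # reservoir size (assumed)
Win = rng.normal(scale=0.5, size=N)          # random, UNtrained input weights
A = rng.normal(size=(N, N))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))    # spectral radius < 1 (echo-state condition)

# Drive the reservoir with identity activation: r_{k+1} = A r_k + Win u_k.
# This is the linear case the paper connects to VAR.
r = np.zeros(N)
states = []
for uk in u[:-1]:
    r = A @ r + Win * uk
    states.append(r.copy())
R = np.array(states)                         # internal states, shape (T-1, N)
Y = u[1:]                                    # one-step-ahead targets

# Only the readout is trained, by ridge-regularized least squares.
lam = 1e-6
Wout = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ Y)

pred = R @ Wout                              # in-sample one-step forecasts
rmse = np.sqrt(np.mean((pred - Y) ** 2))
```

A quadratic readout — appending elementwise squares of the state, `np.hstack([R, R**2])` — gives the nonlinear-VAR variant the abstract mentions; everything else, including the random untrained internal weights, stays the same.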

Year:  2021        PMID: 33754755     DOI: 10.1063/5.0024890

Source DB:  PubMed          Journal:  Chaos        ISSN: 1054-1500            Impact factor:   3.642


Related articles: 5 in total

1.  Reservoir computing with random and optimized time-shifts.

Authors:  Enrico Del Frate; Afroza Shirin; Francesco Sorrentino
Journal:  Chaos       Date:  2021-12       Impact factor: 3.642

2.  A machine-learning approach for long-term prediction of experimental cardiac action potential time series using an autoencoder and echo state networks.

Authors:  Shahrokh Shahi; Flavio H Fenton; Elizabeth M Cherry
Journal:  Chaos       Date:  2022-06       Impact factor: 3.741

3.  Decomposing predictability to identify dominant causal drivers in complex ecosystems.

Authors:  Kenta Suzuki; Shin-Ichiro S Matsuzaki; Hiroshi Masuya
Journal:  Proc Natl Acad Sci U S A       Date:  2022-10-10       Impact factor: 12.779

4.  Prediction of chaotic time series using recurrent neural networks and reservoir computing techniques: A comparative study.

Authors:  Shahrokh Shahi; Flavio H Fenton; Elizabeth M Cherry
Journal:  Mach Learn Appl       Date:  2022-04-09

5.  Connecting reservoir computing with statistical forecasting and deep neural networks.

Authors:  Lina Jaurigue; Kathy Lüdge
Journal:  Nat Commun       Date:  2022-01-11       Impact factor: 14.919


Beijing Coyote Bioscience Co., Ltd. © 2022-2023.