Difference between memory and prediction in linear recurrent networks.

Sarah Marzen

Abstract

Recurrent networks are often trained to memorize their input, in the hope that such training will increase the network's ability to predict. We show that networks designed to memorize input can be arbitrarily bad at prediction. We also find, for several types of inputs, that one-node networks optimized for prediction are nearly at the upper bounds on predictive capacity given by Wiener filters and are roughly equivalent in performance to randomly generated five-node networks. Our results suggest that maximizing memory capacity leads to very different networks than maximizing predictive capacity, and that optimizing recurrent weights can decrease reservoir size by half an order of magnitude.
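The abstract contrasts memory capacity (reconstructing past inputs from the network state) with predictive capacity (estimating not-yet-seen inputs). A minimal NumPy sketch of both quantities for a small linear recurrent reservoir, assuming an AR(1) input process and the standard squared-correlation definition of capacity; the network size, input process, and delay range here are illustrative, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) input: temporally correlated, hence partially predictable
# (i.i.d. white noise would have zero predictive capacity)
T, a = 20000, 0.9
u = np.zeros(T)
for t in range(1, T):
    u[t] = a * u[t - 1] + rng.standard_normal()

# Linear recurrent network (reservoir): x[t] = W x[t-1] + v u[t-1]
n = 5
W = rng.standard_normal((n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9
v = rng.standard_normal(n)

X = np.zeros((T, n))
for t in range(1, T):
    X[t] = W @ X[t - 1] + v * u[t - 1]

burn = 500                       # discard the transient from x[0] = 0
idx = np.arange(burn, T - 1)

def capacity(delay):
    """Squared correlation between the best linear readout of x[t] and
    u[t-1-delay]; delay = -1 targets the not-yet-seen input u[t]."""
    target = u[idx - 1 - delay]
    w, *_ = np.linalg.lstsq(X[idx], target, rcond=None)
    return np.corrcoef(X[idx] @ w, target)[0, 1] ** 2

mem_cap = sum(capacity(k) for k in range(20))   # recall of past inputs
pred_cap = capacity(-1)                          # one-step prediction
print(f"memory capacity     ~ {mem_cap:.2f}")
print(f"predictive capacity ~ {pred_cap:.2f} "
      f"(Wiener bound for AR(1): a^2 = {a**2:.2f})")
```

For an AR(1) input, the optimal (Wiener) one-step predictor from the whole past is `a * u[t-1]`, so predictive capacity is bounded by `a**2` no matter how large the reservoir, while memory capacity keeps growing with delay terms; this is the kind of gap between the two objectives the paper quantifies.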

Year:  2017        PMID: 29346995     DOI: 10.1103/PhysRevE.96.032308

Source DB:  PubMed          Journal:  Phys Rev E        ISSN: 2470-0045            Impact factor:   2.529


  2 in total

1.  Infinitely large, randomly wired sensors cannot predict their input unless they are close to deterministic.

Authors:  Sarah Marzen
Journal:  PLoS One       Date:  2018-08-29       Impact factor: 3.240

2.  Echo State Networks with Self-Normalizing Activations on the Hyper-Sphere.

Authors:  Pietro Verzelli; Cesare Alippi; Lorenzo Livi
Journal:  Sci Rep       Date:  2019-09-25       Impact factor: 4.379