| Literature DB >> 28292907 |
Title: Overcoming catastrophic forgetting in neural networks
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, Raia Hadsell.
Abstract
The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.
Keywords: artificial intelligence; continual learning; deep learning; stability plasticity; synaptic consolidation
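The "selective slowing" the abstract describes is a quadratic penalty that anchors each weight to its old-task value, scaled by that weight's estimated importance (in the paper, a Fisher-information estimate). A minimal sketch of that penalty and its gradient; the names `ewc_penalty`, `fisher`, and `lam` are illustrative, not taken from this record:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Quadratic anchor to old-task weights theta_star.

    fisher holds per-weight importance estimates: large entries make
    moving that weight away from theta_star expensive, i.e. learning
    on it is slowed; unimportant weights stay nearly free to change.
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

def ewc_gradient(grad_new_task, theta, theta_star, fisher, lam=1.0):
    """Total gradient: new-task gradient plus the penalty's pull
    back toward the old-task solution, weighted by importance."""
    return grad_new_task + lam * fisher * (theta - theta_star)
```

The key design point matching the abstract: the penalty is zero at the old-task optimum and grows fastest along directions the importance estimate marks as critical, so training on a new task drifts only along directions the old task does not care about.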
Year: 2017 PMID: 28292907 PMCID: PMC5380101 DOI: 10.1073/pnas.1611835114
Source DB: PubMed Journal: Proc Natl Acad Sci U S A ISSN: 0027-8424 Impact factor: 11.205