
Continuous learning in single-incremental-task scenarios.

Davide Maltoni, Vincenzo Lomonaco.

Abstract

It was recently shown that architectural, regularization and rehearsal strategies can be used to train deep models sequentially on a number of disjoint tasks without forgetting previously acquired knowledge. However, these strategies are still unsatisfactory if the tasks are not disjoint but constitute a single incremental task (e.g., class-incremental learning). In this paper we point out the differences between multi-task and single-incremental-task scenarios and show that well-known approaches such as LWF, EWC and SI are not ideal for incremental task scenarios. A new approach, denoted as AR1, combining architectural and regularization strategies, is then specifically proposed. AR1's overhead (in terms of memory and computation) is very small, thus making it suitable for online learning. When tested on CORe50 and iCIFAR-100, AR1 outperformed existing regularization strategies by a good margin.
Copyright © 2019 Elsevier Ltd. All rights reserved.
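
Note: the regularization strategies the abstract compares (e.g., EWC, SI) share a common mechanism: a penalty term that anchors parameters judged important for earlier tasks while the model trains on new data. Below is a minimal PyTorch-style sketch of an EWC-like quadratic penalty; it is illustrative only, not the paper's AR1 implementation, and the names fisher, old_params and lam are assumed for the example.

    import torch
    import torch.nn as nn

    def ewc_penalty(model, fisher, old_params, lam=1.0):
        # Quadratic penalty discouraging changes to weights deemed
        # important (high Fisher information) for previous tasks.
        penalty = torch.zeros(())
        for name, p in model.named_parameters():
            penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
        return 0.5 * lam * penalty

    # Illustrative usage: total loss = new-task loss + importance penalty.
    model = nn.Linear(4, 2)
    old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
    fisher = {n: torch.ones_like(p) for n, p in model.named_parameters()}  # placeholder importances
    x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))
    loss = nn.functional.cross_entropy(model(x), y) + ewc_penalty(model, fisher, old_params, lam=0.4)
    loss.backward()

Per the abstract, AR1 combines such a regularization term with an architectural strategy; the sketch above covers only the regularization half.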

Keywords:  Continuous learning; Deep learning; Incremental class learning; Lifelong learning; Object recognition; Single-incremental-task

Year:  2019        PMID: 31005851     DOI: 10.1016/j.neunet.2019.03.010

Source DB:  PubMed          Journal:  Neural Netw        ISSN: 0893-6080


  3 in total

1.  Towards in vivo neural decoding.

Authors:  Daniel Valencia; Amir Alimohammad
Journal:  Biomed Eng Lett       Date:  2022-02-10

2.  Catastrophic Forgetting in Deep Graph Networks: A Graph Classification Benchmark.

Authors:  Antonio Carta; Andrea Cossu; Federico Errica; Davide Bacciu
Journal:  Front Artif Intell       Date:  2022-02-04

3.  Is Class-Incremental Enough for Continual Learning?

Authors:  Andrea Cossu; Gabriele Graffieti; Lorenzo Pellegrini; Davide Maltoni; Davide Bacciu; Antonio Carta; Vincenzo Lomonaco
Journal:  Front Artif Intell       Date:  2022-03-24
