Connor Shorten, Taghi M. Khoshgoftaar, Borko Furht.
Abstract
Natural Language Processing (NLP) is one of the most captivating applications of Deep Learning. In this survey, we consider how the Data Augmentation training strategy can aid in its development. We begin with the major motifs of Data Augmentation summarized into strengthening local decision boundaries, brute force training, causality and counterfactual examples, and the distinction between meaning and form. We follow these motifs with a concrete list of augmentation frameworks that have been developed for text data. Deep Learning generally struggles with the measurement of generalization and characterization of overfitting. We highlight studies that cover how augmentations can construct test sets for generalization. NLP is at an early stage in applying Data Augmentation compared to Computer Vision. We highlight the key differences and promising ideas that have yet to be tested in NLP. For the sake of practical implementation, we describe tools that facilitate Data Augmentation such as the use of consistency regularization, controllers, and offline and online augmentation pipelines, to name a few. Finally, we discuss interesting topics around Data Augmentation in NLP such as task-specific augmentations, the use of prior knowledge in self-supervised learning versus Data Augmentation, intersections with transfer and multi-task learning, and ideas for AI-GAs (AI-Generating Algorithms). We hope this paper inspires further research interest in Text Data Augmentation.
Keywords: Big Data; Data Augmentation; NLP; Natural Language Processing; Overfitting; Text Data
Year: 2021 PMID: 34306963 PMCID: PMC8287113 DOI: 10.1186/s40537-021-00492-0
Source DB: PubMed Journal: J Big Data ISSN: 2196-1115
Fig. 1Success of EDA applied to 5 text classification datasets. A key takeaway from these results is the performance difference with less data. The gain is much more pronounced with 500 labeled examples, compared to 5,000 or the full training set
Fig. 2Examples of easy data augmentation transformations
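The four EDA transformations shown in Fig. 2 (synonym replacement, random insertion, random swap, random deletion) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the tiny `SYNONYMS` table below is a placeholder assumption, whereas EDA proper draws synonyms from WordNet.

```python
import random

# Placeholder synonym table (assumption for illustration; EDA uses WordNet).
SYNONYMS = {"quick": ["fast", "speedy"], "happy": ["glad", "joyful"]}

def synonym_replacement(words, n=1):
    # Replace up to n words that have known synonyms.
    out = list(words)
    candidates = [i for i, w in enumerate(out) if w in SYNONYMS]
    for i in random.sample(candidates, min(n, len(candidates))):
        out[i] = random.choice(SYNONYMS[out[i]])
    return out

def random_insertion(words, n=1):
    # Insert a synonym of some word at a random position, n times.
    out = list(words)
    for _ in range(n):
        syns = [s for w in out for s in SYNONYMS.get(w, [])]
        if syns:
            out.insert(random.randrange(len(out) + 1), random.choice(syns))
    return out

def random_swap(words, n=1):
    # Swap two randomly chosen positions, n times.
    out = list(words)
    for _ in range(n):
        i, j = random.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    return out

def random_deletion(words, p=0.1):
    # Drop each word with probability p; never return an empty sentence.
    kept = [w for w in words if random.random() > p]
    return kept or [random.choice(words)]

sentence = "the quick dog is happy".split()
print(random_swap(sentence))
```

Each operation yields a slightly perturbed sentence that keeps the original label, which is why the gains in Fig. 1 are largest in the low-data regime.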
Fig. 3Left, word-level mixup. Right, sentence-level mixup. The red outline highlights where augmentation occurs in the processing pipeline
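The mixup variants in Fig. 3 interpolate continuous representations rather than raw text. A minimal sketch, assuming sentences are already embedded as `(seq_len, dim)` arrays: mixing per-word embeddings gives word-level mixup, and applying the same interpolation to pooled sentence vectors gives sentence-level mixup. The shapes and the Beta(alpha, alpha) mixing coefficient follow the standard mixup recipe; this is not the surveyed authors' exact code.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(emb_a, emb_b, label_a, label_b, alpha=0.2):
    """Interpolate two embedded sentences and their one-hot labels."""
    lam = rng.beta(alpha, alpha)  # mixing coefficient in (0, 1)
    mixed_x = lam * emb_a + (1 - lam) * emb_b
    mixed_y = lam * label_a + (1 - lam) * label_b
    return mixed_x, mixed_y

# Toy example: two 3-word "sentences" with 4-dim embeddings, 2 classes.
a = rng.normal(size=(3, 4))
b = rng.normal(size=(3, 4))
x, y = mixup(a, b, np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

The red-outlined stage in Fig. 3 corresponds to where this interpolation is applied: after embedding lookup (word-level) or after sentence encoding (sentence-level).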
Fig. 4Directions for feature space augmentation explored in MODALS
Fig. 5Fooled by injected text. Image taken from Jia and Liang [89]
Fig. 6Unsupervised data augmentation schema. Image taken from Xie et al. [105]
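The schema in Fig. 6 rests on a consistency-regularization term: the model's prediction on an unlabeled example should agree with its prediction on an augmented version of that example (e.g. a back-translation). A minimal sketch of such a KL-based consistency loss, assuming the model outputs are already probability vectors; details such as confidence thresholding and stopping gradients through the original branch, which UDA uses, are omitted here.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) for discrete probability vectors; eps avoids log(0).
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def consistency_loss(probs_original, probs_augmented):
    # Penalize disagreement between predictions on an unlabeled example
    # and on its augmentation; added to the supervised loss during training.
    return kl_divergence(probs_original, probs_augmented)

print(consistency_loss([0.9, 0.1], [0.6, 0.4]))
```

Minimizing this term pushes the decision boundary away from unlabeled examples and their augmentations, which connects back to the "strengthening local decision boundaries" motif in the abstract.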
Fig. 7Developing attacks in TextAttack [119]