| Literature DB >> 33385699 |
Krishna Chaitanya, Neerav Karani, Christian F Baumgartner, Ertunc Erdil, Anton Becker, Olivio Donati, Ender Konukoglu.
Abstract
Supervised learning-based segmentation methods typically require a large amount of annotated training data to generalize well at test time. In medical applications, curating such datasets is not a favourable option because acquiring a large number of annotated samples from experts is time-consuming and expensive. Consequently, numerous methods have been proposed in the literature for learning with limited annotated examples. Unfortunately, these approaches have not yet yielded significant gains over random data augmentation for image segmentation, where random augmentations themselves do not yield high accuracy. In this work, we propose a novel task-driven data augmentation method for learning with limited labeled data, where the synthetic data generator is optimized for the segmentation task. The generator of the proposed method models intensity and shape variations using two sets of transformations: additive intensity transformations and deformation fields. Both transformations are optimized using labeled as well as unlabeled examples in a semi-supervised framework. Our experiments on three medical datasets, namely cardiac, prostate and pancreas, show that the proposed approach significantly outperforms standard augmentation and semi-supervised approaches for image segmentation in the limited-annotation setting. The code is made publicly available at https://github.com/krishnabits001/task_driven_data_augmentation.

Keywords: Data augmentation; Deep learning; Machine learning; Medical image segmentation; Semi-supervised learning
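The two transformation types the abstract names can be illustrated with a minimal sketch. This is a hypothetical, simplified example for 2-D single-channel images using NumPy and SciPy: the `augment` function and the dense per-pixel fields here are illustrative stand-ins, not the paper's actual learned generators, which are trained end-to-end against the segmentation loss.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def augment(image, label, intensity_field, deformation):
    """Apply an additive intensity transform to the image, then warp both
    image and label with a dense deformation field.

    Hypothetical sketch: `intensity_field` has the same shape as `image`;
    `deformation` holds per-pixel (dy, dx) displacements, shape (2, H, W).
    """
    # Additive intensity transformation: only the image is modified,
    # since changing pixel intensities does not move anatomical boundaries.
    intensified = image + intensity_field

    # Deformation field: sample at the identity grid plus displacements.
    h, w = image.shape
    grid_y, grid_x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([grid_y + deformation[0], grid_x + deformation[1]])

    # Bilinear interpolation for the image ...
    warped_image = map_coordinates(intensified, coords, order=1, mode="nearest")
    # ... but nearest-neighbour for the label, to keep class values discrete.
    warped_label = map_coordinates(label, coords, order=0, mode="nearest")
    return warped_image, warped_label
```

With zero fields the function is the identity, and a constant `intensity_field` simply shifts all intensities; in the paper both fields would instead be produced by generator networks optimized jointly with the segmentation network.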
Year: 2020 PMID: 33385699 DOI: 10.1016/j.media.2020.101934
Source DB: PubMed Journal: Med Image Anal ISSN: 1361-8415 Impact factor: 8.545