
Robustness study of noisy annotation in deep learning based medical image segmentation.

Shaode Yu; Mingli Chen; Erlei Zhang; Junjie Wu; Hang Yu; Zi Yang; Lin Ma; Xuejun Gu; Weiguo Lu

Abstract

Partly due to the use of exhaustively annotated data, deep networks have achieved impressive performance on medical image segmentation. Medical imaging data paired with noisy annotation are ubiquitous, yet little is known about the effect of noisy annotation on deep learning based medical image segmentation. We studied this effect in the context of mandible segmentation from CT images. First, 202 images of Head & Neck cancer patients were collected from our clinical database, where the organs-at-risk had been annotated by one of twelve planning dosimetrists. The mandibles were roughly annotated as the planning avoidance structure. Then, the mandible labels were checked and corrected by a Head & Neck specialist to obtain the reference standard. Finally, by varying the ratio of noisy labels in the training set, deep networks were trained and tested for mandible segmentation. The trained models were further tested on two other public data sets. Experimental results indicated that networks trained with noisy labels segmented worse than those trained with the reference standard, and in general, fewer noisy labels led to better performance. When 20% or fewer of the training cases carried noisy labels, no significant difference was found between the segmentation results of models trained on noisy versus reference annotation. Cross-dataset validation verified that models trained with noisy data achieved performance competitive with those trained with the reference standard. This study suggests that the involved network is, to some extent, robust to noisy annotation in mandible segmentation from CT images. It also highlights the importance of labeling quality in deep learning. In future work, extra attention should be paid to how a small number of reference-standard samples can be utilized to improve the performance of deep learning with noisy annotation.
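The training protocol described above, composing a training set in which a chosen fraction of cases uses noisy labels and the remainder uses the corrected reference standard, can be sketched as follows. This is a hypothetical helper for illustration only; the function name, the string tags, and the seeding scheme are assumptions, not the authors' code.

```python
import random

def assign_label_sources(case_ids, noisy_ratio, seed=0):
    """Sketch of the noisy-ratio protocol: for each training case,
    decide whether to use its noisy annotation or its corrected
    reference-standard annotation.

    Approximately ``noisy_ratio`` of the cases are drawn (without
    replacement) from the pool and marked "noisy"; the rest are
    marked "reference". A fixed seed keeps the split reproducible.
    """
    if not 0.0 <= noisy_ratio <= 1.0:
        raise ValueError("noisy_ratio must be in [0, 1]")
    rng = random.Random(seed)
    n_noisy = round(noisy_ratio * len(case_ids))
    noisy_subset = set(rng.sample(case_ids, n_noisy))
    return {cid: ("noisy" if cid in noisy_subset else "reference")
            for cid in case_ids}

# Example: 202 cases (as in the study), 20% trained with noisy labels.
sources = assign_label_sources(list(range(202)), noisy_ratio=0.2)
```

Sweeping `noisy_ratio` over several values (e.g. 0.0, 0.2, 0.5, 1.0) and retraining at each setting reproduces the kind of robustness curve the abstract reports.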
© 2020 Institute of Physics and Engineering in Medicine.


Keywords:  deep learning; medical image segmentation; noisy annotation; radiation oncology

Year:  2020        PMID: 32503027     DOI: 10.1088/1361-6560/ab99e5

Source DB:  PubMed          Journal:  Phys Med Biol        ISSN: 0031-9155            Impact factor:   3.609


Related articles (5 in total)

1.  SinGAN-Seg: Synthetic training data generation for medical image segmentation.

Authors:  Vajira Thambawita; Pegah Salehi; Sajad Amouei Sheshkal; Steven A Hicks; Hugo L Hammer; Sravanthi Parasa; Thomas de Lange; Pål Halvorsen; Michael A Riegler
Journal:  PLoS One       Date:  2022-05-02       Impact factor: 3.752

2.  Deep learning-based medical image segmentation with limited labels.

Authors:  Weicheng Chi; Lin Ma; Junjie Wu; Mingli Chen; Weiguo Lu; Xuejun Gu
Journal:  Phys Med Biol       Date:  2020-11-20       Impact factor: 3.609

3.  Validation for measurements of skeletal muscle areas using low-dose chest computed tomography.

Authors:  Woo Hyeon Lim; Chang Min Park
Journal:  Sci Rep       Date:  2022-01-10       Impact factor: 4.379

4.  Deep learning-based classification and structure name standardization for organ at risk and target delineations in prostate cancer radiotherapy.

Authors:  Christian Jamtheim Gustafsson; Michael Lempart; Johan Swärd; Emilia Persson; Tufve Nyholm; Camilla Thellenberg Karlsson; Jonas Scherman
Journal:  J Appl Clin Med Phys       Date:  2021-10-08       Impact factor: 2.102

5.  Generalising from conventional pipelines using deep learning in high-throughput screening workflows.

Authors:  Javier Jarazo; Andreas Husch; Beatriz Garcia Santa Cruz; Jan Sölter; Gemma Gomez-Giro; Claudia Saraiva; Sonia Sabate-Soler; Jennifer Modamio; Kyriaki Barmpa; Jens Christian Schwamborn; Frank Hertel
Journal:  Sci Rep       Date:  2022-07-06       Impact factor: 4.996

