Yunpeng Wang1, Lingxiao Zhou1,2, Mingming Wang3, Cheng Shao3, Lili Shi1, Shuyi Yang1, Zhiyong Zhang1, Mingxiang Feng4, Fei Shan1, Lei Liu1,5. 1. Shanghai Public Health Clinical Center and Institutes of Biomedical Sciences, Fudan University, Shanghai, China. 2. Department of Respiratory Medicine, Zhongshan-Xuhui Hospital, Fudan University, Shanghai, China. 3. School of Computer Science, Fudan University, Shanghai, China. 4. Chest Surgery Department, Zhongshan Hospital, Fudan University, Shanghai, China. 5. Shanghai University of Medicine & Health Sciences, Shanghai, China.
Abstract
BACKGROUND: The efficient and accurate diagnosis of pulmonary adenocarcinoma before surgery is of considerable significance to clinicians. Although computed tomography (CT) examinations are widely used in practice, it is still challenging and time-consuming for radiologists to distinguish between different types of subcentimeter pulmonary nodules. Although many deep learning algorithms have been proposed, their performance largely depends on vast amounts of data, which are difficult to collect in medical imaging. Therefore, we propose an automatic classification system for subcentimeter pulmonary adenocarcinoma, combining a convolutional neural network (CNN) and a generative adversarial network (GAN), to optimize clinical decision-making and to provide design ideas for small-dataset algorithms. METHODS: A total of 206 nodules with postoperative pathological labels were analyzed, comprising 30 adenocarcinomas in situ (AISs), 119 minimally invasive adenocarcinomas (MIAs), and 57 invasive adenocarcinomas (IACs). Our system consisted of two parts: GAN-based image synthesis and CNN classification. First, several popular existing GAN techniques were employed to augment the datasets, and comprehensive experiments were conducted to evaluate the quality of the GAN synthesis. Additionally, our classification system operated on two-dimensional (2D) nodule-centered CT patches without the need for manual labeling information. RESULTS: For GAN-based image synthesis, the visual Turing test showed that even radiologists could not reliably distinguish the GAN-synthesized images from the raw images (accuracy: primary radiologist 56%, senior radiologist 65%). For CNN classification, our progressive growing wGAN improved the performance of the CNN most effectively (area under the curve = 0.83).
The experiments indicated that the proposed GAN augmentation method improved the classification accuracy by 23.5% (from 37.0% to 60.5%) and 7.3% (from 53.2% to 60.5%) in comparison with training on raw and commonly augmented images, respectively. The performance of this combined GAN and CNN method (accuracy: 60.5%±2.6%) was comparable to state-of-the-art methods, and our CNN was also more lightweight. CONCLUSIONS: The experiments revealed that GAN synthesis techniques could effectively alleviate the problem of insufficient data in medical imaging. The proposed GAN plus CNN framework can be generalized for building other computer-aided diagnosis (CADx) algorithms and thus assist in diagnosis. © 2020 Quantitative Imaging in Medicine and Surgery. All rights reserved.
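The augmentation strategy described above pools GAN-synthesized patches with the raw labeled set before CNN training. The minimal PyTorch sketch below illustrates that idea only; the patch size, class counts, network architecture, and random tensors standing in for CT patches and GAN output are all illustrative assumptions, not the authors' actual pipeline.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-ins for 2D nodule-centered CT patches (1 channel, 32x32)
# with three classes: 0 = AIS, 1 = MIA, 2 = IAC.
N_RAW, N_SYNTH, PATCH = 20, 40, 32
raw_patches = torch.randn(N_RAW, 1, PATCH, PATCH)
raw_labels = torch.randint(0, 3, (N_RAW,))

# In the described pipeline, a trained GAN generator would produce these;
# here random tensors stand in for GAN-synthesized patches.
synth_patches = torch.randn(N_SYNTH, 1, PATCH, PATCH)
synth_labels = torch.randint(0, 3, (N_SYNTH,))

# GAN augmentation: pool the synthetic samples with the raw training set.
x = torch.cat([raw_patches, synth_patches])
y = torch.cat([raw_labels, synth_labels])

# A lightweight CNN classifier (architecture is illustrative only).
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 8 * 8, 3),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(5):  # a few training steps on the augmented set
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print(x.shape[0])  # 60 training samples after augmentation
```

The same pooling step applies regardless of which GAN variant produced the synthetic patches, which is why the paper can compare several GAN techniques against a fixed classifier.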
Keywords:
Subcentimeter pulmonary adenocarcinoma diagnosis; computed tomography; data augmentation; deep convolutional neural networks; generative adversarial network (GAN)