Yuya Onishi1, Atsushi Teramoto2, Masakazu Tsujimoto3, Tetsuya Tsukamoto4, Kuniaki Saito1, Hiroshi Toyama4, Kazuyoshi Imaizumi4, Hiroshi Fujita5. 1. Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake cho, Toyoake City, Aichi, 470-1192, Japan. 2. Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake cho, Toyoake City, Aichi, 470-1192, Japan. teramoto@fujita-hu.ac.jp. 3. Fujita Health University Hospital, 1-98 Dengakugakubo, Kutsukake cho, Toyoake City, Aichi, 470-1192, Japan. 4. School of Medicine, Fujita Health University, 1-98 Dengakugakubo, Kutsukake cho, Toyoake City, Aichi, 470-1192, Japan. 5. Gifu University, 1-1 Yanagido, Gifu, 501-1194, Japan.
Abstract
PURPOSE: Early detection and treatment of lung cancer are critically important; however, classifying pulmonary nodules as benign or malignant from CT images alone is difficult. To address this problem, the authors previously proposed a pulmonary-nodule classification method based on a deep convolutional neural network (DCNN) and a generative adversarial network (GAN). That method classified nodules using only axial cross sections. In actual clinical examinations, however, a comprehensive judgment requires observing a nodule from multiple cross sections. In the present study, the previously proposed DCNN- and GAN-based automatic classification method was extended to multiple cross sections of pulmonary nodules to enable such comprehensive analysis. METHODS: CT images of 60 cases with pathological diagnoses confirmed by biopsy were analyzed. First, multiplanar images of each pulmonary nodule were generated. Three DCNNs, one per cross section, were then trained for classification: each was first pretrained on GAN-generated nodule images and subsequently fine-tuned on the original nodule images. RESULTS: The proposed method achieved a specificity of 77.8% and a sensitivity of 93.9%. Compared with the authors' previous report, specificity improved by 11.1% without any reduction in sensitivity. CONCLUSION: This study presents a comprehensive analysis method for classifying pulmonary nodules from multiple sections using a GAN and DCNNs. The discrimination performance obtained with multiplanar images improved on that of the authors' previous study.
In addition, the results demonstrate that pretraining on GAN-generated images, rather than conventional data augmentation, can enhance classification accuracy even for medical datasets containing relatively few images.
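The multiplanar pipeline described above, extracting orthogonal cross sections through a nodule and combining the per-plane classifier outputs, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the choice of the nodule center as the section point, and the averaging of per-plane probabilities with a 0.5 threshold are all assumptions for demonstration.

```python
import numpy as np

def extract_multiplanar(volume, center):
    """Extract axial, coronal, and sagittal 2D sections of a CT volume
    through a given nodule center (z, y, x)."""
    z, y, x = center
    axial    = volume[z, :, :]   # slice perpendicular to the z-axis
    coronal  = volume[:, y, :]   # slice perpendicular to the y-axis
    sagittal = volume[:, :, x]   # slice perpendicular to the x-axis
    return axial, coronal, sagittal

def combine_predictions(plane_probs):
    """Fuse per-plane malignancy probabilities (one per DCNN) by averaging,
    then threshold at 0.5 for the final benign/malignant decision."""
    p = float(np.mean(plane_probs))
    return p, p >= 0.5

# Example: a toy 3x3x3 "volume" and hypothetical per-plane DCNN outputs.
vol = np.arange(27, dtype=np.float32).reshape(3, 3, 3)
axial, coronal, sagittal = extract_multiplanar(vol, center=(1, 1, 1))
prob, is_malignant = combine_predictions([0.9, 0.8, 0.7])
```

In practice each extracted section would be fed to its own pretrained-and-fine-tuned DCNN; the averaging step stands in for whatever fusion rule the classifiers' outputs are combined with.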