| Literature DB >> 30719445 |
Yuya Onishi, Atsushi Teramoto, Masakazu Tsujimoto, Tetsuya Tsukamoto, Kuniaki Saito, Hiroshi Toyama, Kazuyoshi Imaizumi, Hiroshi Fujita.
Abstract
Lung cancer is a leading cause of death worldwide. Although computed tomography (CT) examinations are frequently used for lung cancer diagnosis, it can be difficult to distinguish between benign and malignant pulmonary nodules on the basis of CT images alone. Therefore, a bronchoscopic biopsy may be conducted if malignancy is suspected following CT examinations. However, biopsies are highly invasive, and patients with benign nodules may undergo many unnecessary biopsies. To prevent this, an imaging diagnosis with high classification accuracy is essential. In this study, we investigate the automated classification of pulmonary nodules in CT images using a deep convolutional neural network (DCNN). We use generative adversarial networks (GANs) to generate additional images when only small amounts of data are available, which is a common problem in medical research, and evaluate whether the classification accuracy is improved by generating a large amount of new pulmonary nodule images using the GAN. Using the proposed method, CT images of 60 cases with confirmed pathological diagnosis by biopsy are analyzed. The benign nodules assessed in this study are difficult for radiologists to differentiate because they cannot be rejected as being malignant. A volume of interest centered on the pulmonary nodule is extracted from the CT images, and further images are created using axial sections and augmented data. The DCNN is trained using nodule images generated by the GAN and then fine-tuned using the actual nodule images to allow the DCNN to distinguish between benign and malignant nodules. This pretraining and fine-tuning process makes it possible to distinguish 66.7% of benign nodules and 93.9% of malignant nodules. These results indicate that the proposed method improves the classification accuracy by approximately 20% in comparison with training using only the original images.Entities:
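The volume-of-interest (VOI) extraction step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `extract_voi`, the cubic VOI shape, and the zero-padding at volume borders are all assumptions.

```python
import numpy as np

def extract_voi(volume, center, size):
    """Extract a cubic VOI of edge length `size` centered on a nodule.

    volume : 3-D CT array indexed (z, y, x)
    center : (z, y, x) voxel coordinates of the nodule center
    Regions extending past the volume border are zero-padded.
    """
    half = size // 2
    voi = np.zeros((size, size, size), dtype=volume.dtype)
    src, dst = [], []
    for c, dim in zip(center, volume.shape):
        lo, hi = c - half, c - half + size
        src.append(slice(max(lo, 0), min(hi, dim)))
        start = max(-lo, 0)
        dst.append(slice(start, start + min(hi, dim) - max(lo, 0)))
    voi[tuple(dst)] = volume[tuple(src)]
    return voi

# Toy example: a 4x4x4 "CT volume" and a 2x2x2 VOI around voxel (2, 2, 2)
ct = np.arange(64, dtype=np.int16).reshape(4, 4, 4)
voi = extract_voi(ct, center=(2, 2, 2), size=2)
```

Axial sections for the 2-D classifier would then be the individual `voi[z]` slices.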
Year: 2019 PMID: 30719445 PMCID: PMC6334309 DOI: 10.1155/2019/6051939
Source DB: PubMed Journal: Biomed Res Int Impact factor: 3.411
Figure 1. Outline of the proposed method to distinguish between benign and malignant nodules.
Figure 2. Examples of analytical cases.
Figure 3. Architecture of the GAN used for nodule generation.
Figure 4. Architecture of the DCNN used for pulmonary nodule classification.
Figure 5. Examples of images generated using WGAN.
Number of images in each dataset for cross-validation.
| Train | Type | Set 1 (original) | Set 1 (augmented) | Set 2 (original) | Set 2 (augmented) | Set 3 (original) | Set 3 (augmented) |
|---|---|---|---|---|---|---|---|
| WGAN | Benign | 5 | 1280 | 6 | 1536 | 6 | 1536 |
| WGAN | Malignant | 11 | 2816 | 11 | 2816 | 11 | 2816 |
| DCNN | Benign | 9 | 2304 | 9 | 2304 | 9 | 2304 |
| DCNN | Malignant | 11 | 2816 | 11 | 2816 | 11 | 2816 |
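In every row of the table above, the augmented count is 256 times the original number of cases (5 → 1280, 6 → 1536, 9 → 2304, 11 → 2816), so each nodule yields 256 training images after axial sectioning and augmentation. A quick arithmetic check:

```python
# (original cases, augmented images) pairs taken from the table above
pairs = [(5, 1280), (6, 1536), (9, 2304), (11, 2816)]
factors = [aug // orig for orig, aug in pairs]
assert all(f == 256 for f in factors)  # every case expands to 256 images
```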
Classification results for various numbers of images used for pretraining.
| Number of generated images | Benign accuracy [%] | Malignant accuracy [%] |
|---|---|---|
| 0 | 51.9 | 84.9 |
| 20,000 | 51.9 | 93.9 |
| 40,000 | 63.0 | 93.9 |
| 60,000 | 66.7 | 93.9 |
| 80,000 | 55.6 | 84.8 |
| 100,000 | 63.0 | 84.8 |
Classification result by the pretraining method.
| Pretraining method | Data augmentation | Benign accuracy [%] | Malignant accuracy [%] |
|---|---|---|---|
| None | No | 25.9 | 78.8 |
| ImageNet | No | 33.3 | 87.9 |
| WGAN (60,000) | No | 63.0 | 81.8 |
| None | Yes | 51.9 | 84.9 |
| ImageNet | Yes | 40.7 | 93.9 |
| WGAN (60,000) | Yes | 66.7 | 93.9 |
Figure 6. ROC curve of the proposed method.
Classification result by difference in network models.
| Model | Pretraining method | Benign accuracy [%] | Malignant accuracy [%] | Overall accuracy [%] |
|---|---|---|---|---|
| Proposed method | None | 51.9 | 84.9 | 70.0 |
| Proposed method | ImageNet | 40.7 | 93.9 | 70.0 |
| Proposed method | WGAN (60,000) | 66.7 | 93.9 | 81.7 |
| GoogLeNet | None | 40.7 | 84.9 | 65.0 |
| GoogLeNet | ImageNet | 48.2 | 87.9 | 70.0 |
| GoogLeNet | WGAN (60,000) | 48.2 | 97.0 | 75.0 |
| VGG16 | None | 29.6 | 90.9 | 63.3 |
| VGG16 | ImageNet | 33.3 | 87.9 | 63.3 |
| VGG16 | WGAN (60,000) | 48.2 | 84.9 | 68.3 |
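The overall column in the table above is consistent with a case-weighted average of the two per-class accuracies. A minimal sketch, assuming the 60 test cases split into 27 benign and 33 malignant — an assumed split, inferred from the reported percentages (e.g. 66.7% ≈ 18/27 and 93.9% ≈ 31/33), not stated in this record:

```python
def overall_accuracy(benign_pct, malignant_pct, n_benign=27, n_malignant=33):
    """Case-weighted overall accuracy from per-class percentages."""
    correct = (round(benign_pct / 100 * n_benign)
               + round(malignant_pct / 100 * n_malignant))
    return round(100 * correct / (n_benign + n_malignant), 1)

overall_accuracy(66.7, 93.9)  # proposed method, WGAN pretraining -> 81.7
```

Under this assumed split, every row of the table reproduces its overall value (e.g. 51.9/84.9 → 70.0, 29.6/90.9 → 63.3).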
Figure 7. Sample images of correctly classified and misclassified nodules.