| Literature DB >> 32134949 |
Atsushi Teramoto1,2, Tetsuya Tsukamoto2, Ayumi Yamada1, Yuka Kiriyama2, Kazuyoshi Imaizumi2, Kuniaki Saito1, Hiroshi Fujita3.
Abstract
Cytology is the first pathological examination performed in the diagnosis of lung cancer. In our previous study, we introduced a deep convolutional neural network (DCNN) to automatically classify cytological images as benign or malignant and achieved an accuracy of 81.0%. To further improve the DCNN's performance, it is necessary to train the network on more images. However, it is difficult to acquire cell images containing a wide variety of cytological features, because doing so requires many manual operations with a microscope. Therefore, in this study, we aim to improve the classification accuracy of a DCNN by using both actual and synthesized cytological images produced with a generative adversarial network (GAN). In the proposed method, patch images were obtained from microscopy images, and a GAN was then used to generate many additional similar images. In this study, we introduce progressive growing of GANs (PGGAN), which enables the generation of high-resolution images. These images were used to pretrain a DCNN, which was then fine-tuned using actual patch images. To confirm the effectiveness of the proposed method, we first evaluated the quality of the images generated by PGGAN and by a conventional deep convolutional GAN. We then evaluated the classification performance for benign and malignant cells, and confirmed that the generated images had characteristics similar to those of the actual images. The overall classification accuracy of lung cells was 85.3%, an improvement of approximately 4.3% over our previous study, which did not use pretraining with GAN-generated images. Based on these results, we confirmed that our proposed method is effective for the classification of cytological images in cases where only limited data are available.
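The two-stage scheme described in the abstract (pretrain on GAN-synthesized patches, then fine-tune on scarce actual patches) can be illustrated with a toy stand-in. In the sketch below, a one-feature logistic classifier replaces the DCNN and randomly generated numbers replace the patch images, so every function name, sample size, and learning rate is illustrative only, not taken from the paper:

```python
import math
import random

random.seed(0)

def make_patches(n, sep=2.0):
    """Toy 1-D 'patch features': class 1 (malignant) is shifted by `sep`."""
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        data.append((random.gauss(y * sep, 1.0), y))
    return data

def sgd(params, data, lr, epochs=3):
    """Logistic-regression SGD; returns updated (w, b)."""
    w, b = params
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def accuracy(params, data):
    w, b = params
    hits = sum((w * x + b > 0) == (y == 1) for x, y in data)
    return hits / len(data)

synthetic = make_patches(500)   # stage 1: GAN-generated stand-ins (plentiful)
actual = make_patches(100)      # stage 2: actual patches (scarce)

params = sgd((0.0, 0.0), synthetic, lr=0.1)   # pretrain on synthetic data
params = sgd(params, actual, lr=0.01)         # fine-tune at a lower rate
print(f"accuracy on fresh samples: {accuracy(params, make_patches(1000)):.2f}")
```

The design choice mirrored here is that pretraining on abundant synthetic data places the model near a good solution, so fine-tuning on the small real set needs only small, low-learning-rate updates.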
Year: 2020 PMID: 32134949 PMCID: PMC7058306 DOI: 10.1371/journal.pone.0229951
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Fig 1Schematic of the study outline.
Fig 2Generation and augmentation of patch images.
Fig 3Progressive growing of generative adversarial network (GAN) (PGGAN) training.
Fig 4Network architecture used for classification.
Fig 5Real and synthesized images.
Confusion matrix of the proposed method.
| Actual \ Estimated | Benign | Malignant | Overall accuracy |
|---|---|---|---|
| Benign | 261 | 45 | 0.853 |
| Malignant | 46 | 268 | |
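Assuming the table's rows are the actual classes and its columns the estimated classes, with malignant as the positive class, the reported sensitivity, specificity, and overall accuracy can be recomputed directly from the four counts:

```python
# Quick check of the reported metrics from the confusion-matrix counts.
tn, fp = 261, 45    # actual benign:    261 called benign, 45 called malignant
fn, tp = 46, 268    # actual malignant:  46 called benign, 268 called malignant

sensitivity = tp / (tp + fn)                # 268 / 314
specificity = tn / (tn + fp)                # 261 / 306
accuracy = (tp + tn) / (tp + tn + fp + fn)  # 529 / 620

print(round(sensitivity, 3), round(specificity, 3), round(accuracy, 3))
# → 0.854 0.853 0.853
```

These values match the sensitivity, specificity, and overall accuracy reported for the proposed method in the comparison table below.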
Fig 6Receiver operating characteristic (ROC) curves of the three pretraining methods.
Comparison of pretraining methods.
| Pretraining method | Sensitivity | Specificity | Overall accuracy | Az |
|---|---|---|---|---|
| Without GAN-generated images (previous study) | 0.850 | 0.768 | 0.810 | 0.872 |
| DCGAN | 0.793 | 0.797 | 0.795 | 0.867 |
| PGGAN (proposed) | 0.854 | 0.853 | 0.853 | 0.901 |
Fig 7Cells correctly classified and misclassified by the previous and proposed methods.