| Literature DB >> 32953512 |
Xiang Wang1, Qingchu Li1, Jiali Cai1, Wei Wang1, Peng Xu2, Yiqian Zhang2, Qu Fang2, Chicheng Fu2, Li Fan1, Yi Xiao1, Shiyuan Liu1.
Abstract
BACKGROUND: Because the treatment and prognosis differ among subtypes of lung adenocarcinoma appearing as ground-glass nodules (GGNs) on computed tomography (CT) scans, it is important to distinguish invasive adenocarcinomas from non-invasive adenocarcinomas. The purpose of this paper is to build deep learning networks and evaluate their performance in differentiating the invasiveness of lung adenocarcinomas appearing as GGNs.
Keywords: Deep learning; computed tomography (CT); ground glass opacity; pulmonary adenocarcinomas; radiomics; tumor invasiveness
Year: 2020 PMID: 32953512 PMCID: PMC7481614 DOI: 10.21037/tlcr-20-370
Source DB: PubMed Journal: Transl Lung Cancer Res ISSN: 2218-6751
Number of nodules for training, validation and testing
| Group | Training | Validation | Testing | Total |
|---|---|---|---|---|
| AAH | 95 | 20 | 20 | 135 |
| AIS | 128 | 28 | 28 | 184 |
| MIA | 145 | 31 | 31 | 207 |
| IAC | 252 | 54 | 54 | 360 |
| Total | 620 | 133 | 133 | 886 |
AAH, atypical adenomatous hyperplasia; AIS, adenocarcinoma in situ; MIA, minimally invasive adenocarcinoma; IAC, invasive adenocarcinoma.
Figure 1 The structure and building block of XimaNet. (A) Structure of XimaNet. For convolutional neural network (CNN) algorithm development for classification, 3D patches with a size of 64×64×64 pixels were used as input. They were first fed into a BN-convolution-BN module with 64 kernels. The resulting feature maps then passed through six building blocks followed by a global average pooling (GAP) module. (B) Structure of the building block of XimaNet. The first building block used a convolution with a stride of 1, while the remaining building blocks used a stride of 2 for downsampling.
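The caption's downsampling scheme (one stride-1 block followed by stride-2 blocks) determines the feature-map size at each stage. A minimal sketch of that size arithmetic, assuming "same" padding so a stride-s convolution divides the spatial size by s (rounding up); the block count and strides come from the caption, everything else is illustrative:

```python
# Trace the spatial edge length of a 64x64x64 input patch through
# six XimaNet-style building blocks (strides 1, 2, 2, 2, 2, 2).
# Assumes 'same' padding: output size = ceil(input size / stride).
import math

def trace_sizes(input_size=64, strides=(1, 2, 2, 2, 2, 2)):
    """Return the feature-map edge length after each building block."""
    sizes = []
    size = input_size
    for s in strides:
        size = math.ceil(size / s)
        sizes.append(size)
    return sizes

print(trace_sizes())  # [64, 32, 16, 8, 4, 2]
```

Under this assumption the six blocks reduce the 64-voxel patch to a 2×2×2 map, which the GAP module then collapses to a single value per channel before classification.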
Figure 2 The structure of the fully connected layer network in Deep-RadNet. The numbers below each layer are the numbers of neurons.
Classification performance of three network models
| Group | Network | Accuracy | F1AVG | MCC |
|---|---|---|---|---|
| AAH/AIS | XimaNet | 0.701 | 0.632 | 0.391 |
| | XimaSharp | 0.663 | 0.614 | 0.376 |
| | Deep-RadNet | 0.746 | 0.709 | 0.452 |
| MIA | XimaNet | 0.657 | 0.645 | 0.388 |
| | XimaSharp | 0.635 | 0.617 | 0.371 |
| | Deep-RadNet | 0.754 | 0.693 | 0.447 |
| (AAH/AIS/MIA) | XimaNet | 0.755 | 0.677 | 0.431 |
| | XimaSharp | 0.735 | 0.662 | 0.428 |
| | Deep-RadNet | 0.837 | 0.771 | 0.513 |
AAH, atypical adenomatous hyperplasia; AIS, adenocarcinoma in situ; MIA, minimally invasive adenocarcinoma; IAC, invasive adenocarcinoma; F1AVG, average F1-score; MCC, Matthews correlation coefficient.
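The table reports accuracy, average F1 and MCC for each network. As a reference for how these metrics relate to a binary confusion matrix, here is a minimal sketch in pure Python; the counts below are made-up for illustration, not from the paper:

```python
# Accuracy, macro-averaged F1 and Matthews correlation coefficient (MCC)
# computed from a 2x2 confusion matrix (tp/fn/fp/tn counts).
import math

def binary_metrics(tp, fn, fp, tn):
    acc = (tp + tn) / (tp + tn + fp + fn)
    # Per-class F1 for the positive and negative class, then averaged.
    f1_pos = 2 * tp / (2 * tp + fp + fn)
    f1_neg = 2 * tn / (2 * tn + fn + fp)
    f1_avg = (f1_pos + f1_neg) / 2
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, f1_avg, mcc

# Hypothetical counts for an invasive-vs-non-invasive split:
acc, f1_avg, mcc = binary_metrics(tp=40, fn=14, fp=10, tn=21)
print(round(acc, 3), round(f1_avg, 3), round(mcc, 3))
```

MCC ranges from -1 to 1 and, unlike accuracy, stays near 0 for a chance-level classifier even when the classes are imbalanced, which is why it is reported alongside accuracy here.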
Figure 3 The ROCs and AUCs of the classification tasks. (A) Receiver operating characteristic (ROC) curve of AAH/AIS versus MIA. (B) ROC of MIA versus IAC. (C) ROC of AAH/AIS&MIA versus IAC.
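The AUCs in Figure 3 each summarize one ROC curve. A minimal sketch of how an ROC curve and its AUC can be computed from predicted scores using the trapezoidal rule; the labels and scores below are made-up for illustration, not values from the paper:

```python
# Build an ROC curve from (label, score) pairs and integrate it with the
# trapezoidal rule to get the AUC. Assumes distinct scores (no ties).
def roc_auc(labels, scores):
    """labels: 1 = positive class, 0 = negative; higher score = more positive."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tpr, fpr = [0.0], [0.0]
    tp = fp = 0
    for _score, label in pairs:  # sweep the threshold from high to low
        if label == 1:
            tp += 1
        else:
            fp += 1
        tpr.append(tp / pos)
        fpr.append(fp / neg)
    # Trapezoidal integration of TPR over FPR.
    return sum((fpr[i] - fpr[i - 1]) * (tpr[i] + tpr[i - 1]) / 2
               for i in range(1, len(fpr)))

labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
print(roc_auc(labels, scores))  # 8/9 ≈ 0.889
```

An AUC of 0.5 corresponds to chance-level ranking of invasive versus non-invasive nodules, and 1.0 to a perfect separation.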
Figure 4 Results of algorithm learning and automatic segmentation. The first to last columns are lung nodule examples selected from AAH, AIS, MIA and IAC, respectively. (A) The first row shows the original CT images of the tumor area. (B) The second row shows the heat maps of the corresponding tumor area; the Grad-CAM method was used to visualize the regions of interest learned by XimaNet, and the color bar on the far right indicates the degree of attention the algorithm paid to each region. (C) The third row shows the segmentation results predicted by XimaSharp (red circles are the automatic segmentation results; blue circles are the ground truth).