Dalia Fahmy1, Heba Kandil2,3, Adel Khelifi4, Maha Yaghi5, Mohammed Ghazal5, Ahmed Sharafeldeen2, Ali Mahmoud2, Ayman El-Baz2.
Abstract
Pulmonary nodules are the precursors of bronchogenic carcinoma; their early detection facilitates early treatment, which saves many lives. Unfortunately, pulmonary nodule detection and classification are liable to subjective variation, with a high rate of missed small cancerous lesions, which opens the way for artificial intelligence (AI) and computer-aided diagnosis (CAD) systems. The field of deep learning and neural networks is expanding every day, with new models designed to overcome diagnostic problems and provide more applicable and easily used tools. In this review, we aim to briefly discuss the current applications of AI in lung segmentation, pulmonary nodule detection, and classification.
Keywords: artificial intelligence; deep learning; neural networks; pulmonary nodule
Year: 2022 PMID: 35406614 PMCID: PMC8997734 DOI: 10.3390/cancers14071840
Source DB: PubMed Journal: Cancers (Basel) ISSN: 2072-6694 Impact factor: 6.639
Figure 1. A Typical CAD System for Lung Cancer Diagnosis.
Figure 2. Main Categories of Lung Segmentation.
Literature review of lung segmentation systems using Hounsfield unit (HU) thresholds, deformable boundaries, shape models, region/edge-based models, or machine learning (ML)-based methods.
| Study | Method | # Subjects | System Evaluation |
|---|---|---|---|
| Amato et al. [ | | 17 CT patients. | The area under the ROC curve (AUC) of the system was |
| Hu et al. [ | | 8 normal CT patients. | The average intrasubject change was |
| Itai et al. [ | | 9 CT patients. | Qualitative evaluation only. |
| Silveria et al. [ | | Stack of chest CT slices. | Qualitative evaluation only. |
| Gao et al. [ | | 8 CT scans. | The average overlap coefficient of the system was |
| Pu et al. [ | | 20 CT patients. | Average over-segmentation and under-segmentation ratios were |
| Korfiatis et al. [ | | 22 CT patients. | The mean overlap coefficient of the system was higher than |
| Wang et al. [ | | 76 CT patients. | The mean overlap coefficient of the system was |
| Van Rikxoort et al. [ | | 100 CT patients. | The accuracy of the system was |
| Wei et al. [ | | 9 CT patients. | The accuracy range of the system was |
| Ye et al. [ | | 108 CT patients. | The average detection rate of the system was |
| Sun et al. [ | | 60 CT patients. | The Dice similarity coefficient (DSC) and mean absolute surface distance of the system were |
| Sofka et al. [ | | 260 CT patients. | The errors in segmenting the left and right lungs were |
| Hua et al. [ | Graph-based search algorithm. | 19 pathological lung CT patients. | The sensitivity, specificity, and Hausdorff distance of the system were |
| Nakagomi et al. [ | Min-cut graph algorithm. | 97 CT patients. | The sensitivity and Jaccard index of the system were |
| Mansoor et al. [ | | More than 400 CT patients. | The DSC, Hausdorff distance, sensitivity, and specificity of the system were |
| Yan et al. [ | Convolutional neural network (CNN). | 861 CT COVID-19 patients. | The system achieved a DSC of |
| Fan et al. [ | Inf-Net and Semi-Inf-Net. | 100 CT images. | The DSC (sensitivity, specificity) of Inf-Net and Semi-Inf-Net were |
| Oulefki et al. [ | Multi-level entropy-based threshold approach. | 297 CT COVID-19 patients. | The DSC, sensitivity, specificity, and precision of the system were |
| Sharafeldeen et al. [ | | 32 CT COVID-19 patients. | The overlap coefficient, DSC, absolute lung volume difference (ALVD), and 95th-percentile bidirectional Hausdorff distance (BHD) were |
| Zhao et al. [ | | 112 CT patients. | The DSC, sensitivity, specificity, and mean surface distance error of the system were |
| Sousa et al. [ | Hybrid deep learning model consisting of U-Net [ | 385 CT patients, collected from five different datasets. | The mean DSC of the system was higher than |
| Kim et al. [ | Otsu’s algorithm. | 447 CT patients. | Sensitivity, specificity, accuracy, AUC, and F1-score of the system were |
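Several of the classical systems in the table above start from the large Hounsfield-unit gap between air-filled lung (around -800 HU) and surrounding soft tissue, either via a fixed HU threshold or an automatic one such as Otsu's algorithm (as in Kim et al.). The sketch below is a toy illustration of that idea on synthetic data; it is not code from any of the cited studies, and the HU values are assumptions for demonstration only.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Threshold maximizing between-class variance (Otsu's method).

    Returns the upper edge of the last histogram bin assigned to the
    lower-intensity class, so `values < t` selects that class.
    """
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    total = hist.sum()
    sum_all = float((hist * centers).sum())
    w0 = 0.0          # cumulative pixel count of the lower class
    sum0 = 0.0        # cumulative intensity sum of the lower class
    best_i, best_var = 0, -1.0
    for i in range(bins):
        w0 += hist[i]
        sum0 += hist[i] * centers[i]
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0                              # mean of lower class
        m1 = (sum_all - sum0) / (total - w0)        # mean of upper class
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_i = var_between, i
    return edges[best_i + 1]

# Toy "CT slice" in Hounsfield units: air-filled lung (~ -800 HU)
# surrounded by soft tissue (~ 40 HU).
rng = np.random.default_rng(0)
slice_hu = rng.normal(40, 20, size=(64, 64))                  # soft tissue
slice_hu[16:48, 16:48] = rng.normal(-800, 50, size=(32, 32))  # lung region

t = otsu_threshold(slice_hu.ravel())
lung_mask = slice_hu < t           # voxels below the threshold = lung
print(int(lung_mask.sum()))        # all 1024 synthetic lung pixels recovered
```

On real CT volumes the threshold step is typically followed by connected-component filtering and morphological operations to remove the trachea and background air and to recover juxta-pleural regions, which is where the more elaborate methods in the table come in.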
Literature review of pulmonary nodule detection and segmentation systems.
| Study | Method | # Subjects | System Evaluation |
|---|---|---|---|
| Brown et al. [ | | 31 CT patients. | The accuracy of the system was |
| Oda et al. [ | | 33 CT patients. | The accuracy of the system was |
| Chang et al. [ | | 8 CT patients. | The detection rate of the system was |
| Way et al. [ | | 96 CT patients. | Qualitative evaluation only. |
| Kuhnigk et al. [ | Automatic morphological and partial-volume analysis. | Low-dose data from 8 clinical metastasis patients. | The proposed method outperformed conventional methods: both systematic and absolute errors were substantially reduced, and the method successfully accounted for slice-thickness and reconstruction-kernel variations. |
| Zhou et al. [ | | 10 ground-glass opacity nodules. | All 10 nodules were detected, with only 1 false positive. |
| Dehmeshki et al. [ | Adaptive sphericity-oriented contrast region growing on the fuzzy connectivity map of the object of interest. | | Visual inspection found that |
| Tao et al. [ | A multi-level statistical learning-based approach for segmentation and detection of ground-glass nodules. | Database: 1100 subvolumes (100 containing ground-glass nodules) acquired from 200 subjects. | Classification accuracy: |
| Messay et al. [ | | 84 CT patients. | The sensitivity of the system was |
| Kubota et al. [ | Region growing. | | |
| Liu et al. [ | | 24 CT patients. | The sensitivity of the system was |
| Choi et al. [ | | 84 CT patients. | The sensitivity of the system was |
| Alilou et al. [ | | 60 CT patients. | The sensitivity of the system was |
| Bai et al. [ | | 99 CT patients. | The number of false positives was reduced by more than |
| Setio et al. [ | | 888 CT patients. | The sensitivity of the system was |
| Akram et al. [ | | 84 CT patients. | The accuracy and sensitivity of the system were |
| Golan et al. [ | Deep convolutional neural network (CNN). | 1018 CT patients. | The sensitivity of the system was |
| Bergtholdt et al. [ | | 1018 CT patients. | The sensitivity of the system was |
| Mukhopadhyay [ | Thresholding approach based on internal texture (solid/part-solid and non-solid) and external attachment (juxta-pleural and juxta-vascular). | 891 nodules from LIDC/IDRI. | Average segmentation accuracy: |
| El-Regaily et al. [ | | 400 CT patients. | The accuracy, sensitivity, and specificity of the system were |
| Zhang et al. [ | Deep belief network (DBN). | 1018 CT patients. | The accuracy of the system was |
| Wang et al. [ | Semi-supervised extreme learning machine (SS-ELM). | 1018 CT patients. | The accuracy of the system was |
| Zhao et al. [ | | 800 CT scans. | Qualitative evaluation only. |
| Charbonnier et al. [ | Subsolid nodule segmentation using voxel classification that eliminates blood vessels. | 170 subsolid nodules from the Multicentric Italian Lung Disease trial. | |
| Luo et al. [ | 3D sphere center-points matching detection network (SCPM-Net). | 888 CT scans. | The sensitivity of the system was |
| Yin et al. [ | Squeeze and attention, and dense atrous spatial pyramid pooling U-Net (SD-U-Net). | 2236 CT slices. | The Dice similarity coefficient (DSC), sensitivity, specificity, and accuracy of the system were |
| Bianconi et al. [ | | | Semi-automated deep learning methods outperformed the conventional methods; DSCs of the deep learning-based methods recorded |
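The detection and segmentation systems above are compared mainly through overlap statistics: the Dice similarity coefficient (DSC), sensitivity, and specificity. As a minimal, self-contained reminder of how these are computed from binary masks (the masks here are made up for illustration):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """DSC, sensitivity, and specificity for a pair of binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()      # correctly labeled nodule voxels
    tn = np.logical_and(~pred, ~truth).sum()    # correctly labeled background
    fp = np.logical_and(pred, ~truth).sum()     # background called nodule
    fn = np.logical_and(~pred, truth).sum()     # nodule voxels missed
    dsc = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dsc, sensitivity, specificity

# Toy example: a 4x4 ground-truth "nodule" and a prediction shifted by one row.
truth = np.zeros((10, 10), dtype=bool)
truth[2:6, 2:6] = True            # 16 true voxels
pred = np.zeros((10, 10), dtype=bool)
pred[3:7, 2:6] = True             # overlaps 12 of the 16 true voxels

dsc, sens, spec = segmentation_metrics(pred, truth)
print(round(dsc, 3), round(sens, 3), round(spec, 3))  # 0.75 0.75 0.952
```

Note that specificity is dominated by the background voxel count, which is why segmentation papers lean on DSC and surface-distance measures (Hausdorff distance, mean surface distance) rather than specificity alone.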
Figure 3. Main Categories of Lung Nodule Classification.
Literature review of pulmonary nodule classification systems.
| Study | Method | # Subjects | System Evaluation |
|---|---|---|---|
| Dehmeshki et al. [ | Shape-based region growing. | 3D lung CT data where nodules are attached to blood vessels or lung wall. | Qualitative evaluation only. |
| Lee et al. [ | Commercial CAD system (IQQA-Chest, EDDA Technology, Princeton Junction, NJ, USA). | 200 chest radiographs (100 normal, 100 with malignant solitary nodules). | Sensitivity of |
| Kuruvilla et al. [ | Feed-forward and feed-forward back-propagation neural networks. | 155 patients from LIDC. | Classification accuracy of |
| Yamamoto et al. [ | Random forest. | 172 patients with NSCLC. | Sensitivity of |
| Orozco et al. [ | | 45 CT scans from ELCAP and LIDC. | Overall precision in classifying cancerous versus non-cancerous nodules was |
| Kumar et al. [ | Deep features using an autoencoder. | 4323 nodules from the NCI LIDC dataset. | |
| Hua et al. [ | | LIDC | Sensitivity (DBN: |
| Kang et al. [ | 3D multi-view CNN (MV-CNN). | LIDC-IDRI | Error rate of |
| Ciompi et al. [ | Multi-stream multi-scale convolutional networks. | | Best accuracy of |
| Song et al. [ | | LIDC-IDRI | Accuracy of |
| Tajbakhsh et al. [ | | LDCT acquired from 31 patients. | AUC = |
| Li et al. [ | Support vector machine (SVM). | 248 GGNs. | Accuracy of classifying GGNs into atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and invasive adenocarcinoma (IA) was |
| Huang et al. [ | Dense convolutional network (DenseNet). | | Error rates for CIFAR (C10: |
| Nibali et al. [ | ResNet | LIDC/IDRI | Sensitivity of |
| Liu et al. [ | Multi-view multi-scale CNNs | LIDC-IDRI and ELCAP | Classification rate as |
| Zhao et al. [ | A deep learning system based on 3D CNNs and multitask learning. | 651 nodules with labels of AAH, AIS, MIA, and IA. | Classification accuracy using a three-class weighted-average F1 score: |
| Li et al. [ | Multivariable linear predictor model built on semantic features. | 100 patients from NLST-LDCT. | AUC at baseline screening: |
| Lyu et al. [ | Multi-level CNN (ML-CNN). | LIDC-IDRI (1018 cases from 1010 patients). | Accuracy: |
| Shaffie et al. [ | | 727 nodules from 467 patients (LIDC). | Classification accuracy of |
| Causey et al. [ | Deep learning CNN. | LIDC-IDRI | Accuracy of malignancy classification with AUC of approximately of |
| Uthoff et al. [ | k-medoids clustering and information theory. | Training: (74 malignant, 289 benign), Validation (50 malignant, 50 benign). | AUC = |
| Ardila et al. [ | A deep learning CNN. | 6716 National Lung Cancer Screening Trial cases, independent clinical validation set of 1139 cases. | AUC = |
| Liu et al. [ | | Benign and malignant nodules from 875 patients. | Training: AUC = |
| Gong et al. [ | A deep learning-based artificial intelligence system for classifying ground-glass nodules (GGNs) as invasive adenocarcinoma (IA) or non-IA. | 828 GGNs of 644 patients (209 IA and 619 non-IA, including 409 adenocarcinomas in situ and 210 minimally invasive adenocarcinomas). | AUC = |
| Sim et al. [ | Radiologists assisted by deep learning–based CNN. | 600 lung cancer–containing chest radiographs and 200 normal chest radiographs. | Average sensitivity improved from |
| Wang et al. [ | A two-stage deep learning strategy: prior-feature learning followed by adaptive-boost deep learning. | 1357 nodules (765 noninvasive (AAH and AIS) and 592 invasive nodules (MIA and IA)). | Classification accuracy of |
| Xia et al. [ | 1. Recurrent residual CNN based on U-Net, | 373 GGNs from 323 patients. | AUC= |
| Li et al. [ | CLR software based on a 3D CNN with a DenseNet architecture as backbone. | 486 consecutive resected lung lesions (320 adenocarcinomas, 40 other malignancies, 55 metastases, and 71 benign lesions). | Classification accuracy for adenocarcinomas, other malignancies, metastases, and benign lesions was |
| Hu et al. [ | | 513 GGNs (100 benign, 413 malignant). | Accuracy of |
| Farahat et al. [ | 1. Three MGRF energies, extracted from three different grades of COVID-19 patients, | 76 CT COVID-19 patients. | |
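Many of the classification systems above report the area under the ROC curve (AUC). A compact way to compute it, equivalent to the Mann-Whitney U statistic, is the fraction of (positive, negative) score pairs the classifier ranks correctly. The sketch below uses made-up malignancy scores purely for illustration:

```python
import numpy as np

def auc_score(labels, scores):
    """AUC as the probability that a random positive outranks a random
    negative (Mann-Whitney U statistic), counting ties as half a win."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()   # correctly ranked pairs
    ties = (pos[:, None] == neg[None, :]).sum()  # tied pairs count 0.5
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy malignancy scores: malignant nodules (label 1) mostly score higher.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
auc = auc_score(labels, scores)
print(auc)  # 8 of 9 pairs ranked correctly -> 0.888...
```

This pairwise view explains why AUC is insensitive to class imbalance and to any monotone rescaling of the scores, which is useful when comparing CAD systems that output probabilities on different scales.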