Nishant Thakur, Mohammad Rizwan Alam, Jamshid Abdul-Ghafar, Yosep Chong.
Abstract
State-of-the-art artificial intelligence (AI) has recently gained considerable interest in the healthcare sector and has provided solutions to problems through automated diagnosis. Cytological examination is a crucial step in the initial diagnosis of cancer, although its diagnostic efficacy is limited. Recently, AI applications in the processing of cytopathological images have shown promising results despite the elementary level of the technology. Here, we performed a systematic review with a quantitative analysis of recent AI applications in non-gynecological (non-GYN) cancer cytology to understand the current technical status. We searched the major online databases, including MEDLINE, Cochrane Library, and EMBASE, for relevant English articles published from January 2010 to January 2021. The search query terms were: "artificial intelligence", "image processing", "deep learning", "cytopathology", and "fine-needle aspiration cytology". Out of 17,000 studies, only 26 studies (26 models) were included in the full-text review, and 13 of these were included in the quantitative analysis. The AI models fell into eight classes according to target organ: thyroid (n = 11, 39%), urinary bladder (n = 6, 21%), lung (n = 4, 14%), breast (n = 2, 7%), pleural effusion (n = 2, 7%), ovary (n = 1, 4%), pancreas (n = 1, 4%), and prostate (n = 1, 4%). Most of the studies focused on classification and segmentation tasks. Although most of the studies reported impressive results, the sizes of their training and validation datasets were limited. Overall, AI is as promising for non-GYN cancer cytopathology as it is for histopathology and gynecological cytology. However, the lack of well-annotated, large-scale datasets with Z-stacking and external cross-validation was the major limitation found across all studies. Future studies with larger datasets, high-quality annotations, and external validation are required.
Keywords: artificial intelligence; cancer; cytopathology; deep learning; systematic review
Year: 2022 PMID: 35884593 PMCID: PMC9316753 DOI: 10.3390/cancers14143529
Source DB: PubMed Journal: Cancers (Basel) ISSN: 2072-6694 Impact factor: 6.575
Figure 1. PRISMA flow chart showing the study selection process.
Table 1. Characteristics of the AI models according to organ type in cytological image analysis.
| No. | Organ | Author | Year | Country | Task | Staining and Preparation Method | Dataset | Pixel Level | Sampling | Z-Stacking Images | External Cross-Validation | Base Model | Performance | Pathologist |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Thyroid | Varlatzidou | 2011 | Greece | Classification | Pap | 335 patients | 1024 × 768 | FNAC | ND | ND | ANN | Sens: 93.80% | NA |
| 2 | Thyroid | Gopinath (1) | 2013 | India | Nuclear segmentation/ | Pap | 110 patches | 256 × 256 | FNAC | ND | ND | SVM/ | Sens: 95% | ATLAS committee |
| 3 | Thyroid | Gopinath (2) | 2013 | India | Nuclear segmentation/ | Pap | 110 patches | 256 × 256 | FNAC | ND | ND | SVM/ | Sens: 90% | ATLAS committee |
| 4 | Thyroid | Gopinath (3) | 2015 | India | Nuclear segmentation/ | Pap | 110 patches | 256 × 256 | FNAC | ND | ND | SVM/ | Sens: 100% | ATLAS committee |
| 5 | Thyroid | Savala | 2017 | India | Classification | May | 57 cases | NA | FNAC | ND | ND | ANN | Acc: 100% | 2 |
| 6 | Thyroid | Gopinath (4) | 2018 | India | Classification | Pap | 110 patches | 256 × 256 | FNAC | ND | ND | ANN/ | Sens: 95% | ATLAS committee |
| 7 | Thyroid | Sanyal | 2018 | India | Classification | Pap | 370 patches | 512 × 512 | FNAC | ND | ND | CNN | Sens: 90.48% | NA |
| 8 | Thyroid | Dov | 2019 | USA | Classification | Pap | 908 WSIs | 150,000 × | FNAC | ND | ND | CNN | Sens: 92% | 3 |
| 9 | Thyroid | Guan | 2019 | China | Classification | LBC H&E | 279 WSIs | 224 × 224 | FNAC | ND | ND | VGG-16/ | Sens: 100% | 1 |
| 10 | Thyroid | Range | 2020 | USA | Classification | Pap | 659 patients | NA | FNAC | Yes | ND | Machine learning and CNNs | Sens: 92.0% | 1 |
| 11 | Thyroid | Fragopoulos | 2020 | Greece | Classification | LBC Pap-stained | 447 WSIs | 1024 × 768 | FNAC | ND | ND | ANN (RBF) | Sens: 95.0% | NA |
| 12 | Urinary bladder | Muralidaran | 2015 | India | Classification | Pap | 115 cases | NA | Urine sample | ND | ND | ANN | (1) All benign and malignant cases were diagnosed correctly | 2 |
| 13 | Urinary bladder | Sanghvi | 2019 | USA | Classification | NA | 2405 WSIs | 150 × 150 | Urine sample | Yes | ND | CNN | Sens: 79.5% | 4 |
| 14 | Urinary bladder | Vaickus | 2018 | USA | Segmentation | NA | 217 WSIs | 40,000 × 40,000 | Urine sample | ND | ND | CNN | Acc: >95% | 2 |
| 15 | Urinary bladder | Zhang | 2020 | China | Classification | Pap | 49 cases | NA | Urine sample | ND | ND | CNN | Identified abnormal | 1 |
| 16 | Urinary bladder | Awan | 2021 | UK | Segmentation, | LBC Pap | 398 WSIs | 256 × 256 | Urine sample | ND | ND | RetinaNet | AUC | NA |
| 17 | Urinary bladder | Nojima | 2021 | India | Classification | LBC Pap | 232 cases | 256 × 256 | Urine sample | ND | ND | VGG16 | AUC: 0.98, F1 score: 0.90 | NA |
| 18 | Lung | Teramoto (1) | 2017 | Japan | Classification | Pap | 76 cases | 256 × 256 | FNAC/Bronchoscopy | ND | ND | CNN | Acc: 71.1% | NA |
| 19 | Lung | Teramoto (2) | 2019 | Japan | Classification | Pap | 46 cases | 224 × 224 | FNAC/Bronchoscopy | ND | ND | CNN | Sens: 89.3% | NA |
| 20 | Lung | Teramoto (3) | 2020 | Japan | Classification | Pap | 60 cases | 256 × 256 | FNAC/Bronchoscopy | ND | ND | CNN/ | Sens: 85.4% | NA |
| 21 | Lung | Gonzalez | 2020 | USA | Classification | Diff-Quik/ | 40 cases | 299 × 299 | FNAC/Bronchoscopy | ND | ND | Inception V3 | For Diff-Quik Model | NA |
| 22 | Breast | Dey | 2011 | India | Classification | H&E | 64 cases | NA | FNAC | ND | ND | ANN | ANN classified all the FA and ILC cases and six out of seven IDC cases | 2 |
| 23 | Breast | Subbaiah | 2013 | India | Classification | H&E | 112 cases | NA | FNAC | ND | ND | ANN | Sens: 100% | 2 |
| 24 | Pleural effusion | Barwad | 2011 | India | Classification | Giemsa/ | 114 cases | NA | Pleural fluid | ND | ND | ANN | Acc: 100% | 2 |
| 25 | Pleural effusion | Tosun | 2015 | USA | Nuclear segmentation/ | Diff-Quik | 34 cases | NA | Pleural fluid | ND | ND | OTBL/ | Acc: 100% | 1 |
| 26 | Ovary | Wu | 2018 | China | Classification | H&E | 85 WSIs | 227 × 227 | FNAC | ND | ND | CNN | Acc: 78.20% | 2 |
| 27 | Pancreas | Boroujeni | 2017 | USA | Nuclear segmentation/ | Pap | 75 cases | NA | FNAC | ND | ND | K-means clustering/MNN | Acc: 100% | NA |
| 28 | Prostate | Nguyen | 2012 | USA | Nuclear segmentation/ | H&E | 17 WSIs | Training | NA | ND | ND | SVM/RBF kernel | Sens: 78% | NA |
Abbreviations: FNAC: Fine-needle aspiration cytology, ND: Not done, ANN: Artificial neural network, LVQ: Learning vector quantizer, Sens: Sensitivity, Spec: Specificity, Acc: Accuracy, PTC: Papillary thyroid carcinoma, FA: Follicular adenoma, NA: Not available, FC: Follicular carcinoma, GAN: Generative adversarial network, SVM: Support vector machine, ENN: Elman neural network, k-NN: k-nearest neighbor, DT: Decision tree, LBC: Liquid-based cytology, RBF: Radial basis function, AU: Atypical urothelial cells, HGUC: High-grade urothelial carcinoma, LGUN: Low-grade urothelial neoplasm, SHGUC: Suspicious for high-grade urothelial carcinoma, BU: Benign urothelial cells, SqC: Squamous cells, Cry: Crystals, Ery: Erythrocytes, Leu: Leukocytes, BI: Blurry images, Deb: Debris, DC: Degenerated cells, UC: Urothelial cells, IC: Inflammatory cells, WSI: Whole-slide image, AUC: Area under the curve, H&E: Hematoxylin and eosin, OTBL: Optimal transport-based linear, CART: Classification and regression tree, SCLC: Small cell lung carcinoma, LCNEC: Large cell neuroendocrine carcinoma, AdC: Adenocarcinoma, SqCC: Squamous cell carcinoma, MNN: Multilayer perceptron neural network, FAd: Fibroadenoma, IDC: Infiltrating ductal carcinoma, ILC: Infiltrating lobular carcinoma, CNN: Convolutional neural network, SC: Serous carcinoma, MC: Mucinous carcinoma, EC: Endometrioid carcinoma, CCC: Clear cell carcinoma.
Figure 2. Number of publications of AI models on cytopathological image analysis, by country.
Figure 3. Classification of artificial intelligence models for cytopathological image analysis according to target organ from 2010 to 2020 (A), along with yearly trends (B).
Figure 4. Comparison of sensitivity and specificity (A) and accuracy (B) of AI classification models in thyroid fine-needle aspiration cytology. The models by Gopinath (3) [36] and Guan et al. [40] showed the highest sensitivity; the models by Gopinath (1) [34], Gopinath (2) [35], and Gopinath (4) [49] showed the highest specificity; and the model by Savala et al. [37] showed the highest accuracy. * Indicates that the dataset was in nuclei form instead of patches.
Figure 5. Comparison of sensitivity and specificity (A) and accuracy (B) of models on lung cytology samples. The model by Gonzalez et al. [28] showed the best sensitivity and specificity, while the models by Teramoto et al. [23] showed the best accuracy.
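As a reader's aid (not drawn from any of the reviewed studies), the performance metrics compared in the figures above reduce to simple ratios over a binary confusion matrix; a minimal sketch with hypothetical counts:

```python
# Illustrative only: how sensitivity, specificity, and accuracy are
# derived from true/false positive and negative counts in a binary
# (e.g., benign vs. malignant) classification task.

def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: fraction of malignant cases correctly flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: fraction of benign cases correctly cleared."""
    return tn / (tn + fp)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Overall fraction of correct calls."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical confusion-matrix counts, for illustration only.
tp, fn, tn, fp = 90, 10, 80, 20
print(sensitivity(tp, fn))          # 0.9
print(specificity(tn, fp))          # 0.8
print(accuracy(tp, tn, fp, fn))     # 0.85
```

A model can score high accuracy while missing many malignant cases when benign samples dominate, which is why the review reports sensitivity and specificity alongside accuracy.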
Figure 6. Example of cytological classification for ovarian cancer: architecture and illustration of the CNN for ovarian cancer image classification. (Source: Figure 4 in Wu et al. [29]. Reprinted with permission from the authors. Copyright (2019), Bioscience Reports, Portland Press [29].)
Figure 7. Examples of cytological images. (A) Conventional smear from the uterine cervix of a 30-year-old female showing high-grade squamous intraepithelial lesion (HSIL) (Pap stain, ×20). (B) Conventional smear from a salivary gland mass of a 30-year-old female patient, showing intraductal carcinoma (H&E stain, ×20). (C) High-power view of a conventional smear from a 64-year-old male patient with a submandibular gland nodule, showing adenoid cystic carcinoma (Diff-Quik stain, ×40). (D) Ascitic fluid liquid-based preparation from a 79-year-old female patient with a history of colon carcinoma, showing metastatic colon carcinoma (Pap stain, ×20).
Figure 8. Application of AI models using cytological images for tumor classification, prognosis prediction, and nuclear segmentation. Published studies are denoted by their serial numbers in Table 1 and are stratified according to the level of supporting evidence (outer circle, internally validated; middle circle, externally validated; inner circle, FDA-approved). * Indicates that the dataset was in nuclei form instead of patches.
Figure 9. Example of stromal and nuclear grading using cytological images, illustrated with gradient-weighted class activation mapping (Grad-CAM). (A) Representative cytology images observed using Grad-CAM and corresponding hematoxylin and eosin (H&E) histology images of invasive or noninvasive urothelial carcinoma (UC). Cytology images with a true-positive diagnosis contained cells with shrunken nuclei, nuclei with color irregularities, or neutrophil infiltration. (B) Representative cytology images observed using Grad-CAM and corresponding H&E histology images of high-grade or low-grade UC. Cytology images with a true-positive diagnosis contained nuclei with coarse chromatin or an obvious nucleolus. (C) Pie charts indicating the proportions of findings associated with a true-positive diagnosis of stromal invasion or nuclear grading in the corresponding histology image. (Source: Figure 4 in Nojima et al. Reprinted with permission from the authors. Copyright (2021), Cancer Cytopathology, ACS journal [47].)
Figure 10. Example of cytological classification of lung cancer. (Top panel) Diagram showing the workflow for annotating and exporting image tiles using QuPath for subsequent input into the convolutional neural network. (Bottom panel) Schematic of the convolutional neural network architecture based on Google Inception V3 (adapted from https://github.com/tensorflow/models/tree/masterr/research/inception). (Source: Figure 2 in Gonzalez et al. Reprinted with permission from the authors. Copyright (2019), Cytopathology, published by Wiley [28].)
Figure 11. Representative figures of digital slides from repository projects: (A) BIG PICTURE, (B) PathLAKE, (C) JP-AID, and (D) NIA Data Dam.