Daniela Cornelia Lazăr1, Mihaela Flavia Avram2, Alexandra Corina Faur3, Adrian Goldiş4, Ioan Romoşan1, Sorina Tăban5, Mărioara Cornianu5.
Abstract
In the field of gastroenterology, the impact of artificial intelligence has been investigated for the purposes of diagnostics, risk stratification of patients, improvement in the quality of endoscopic procedures and early detection of neoplastic diseases, implementation of the best treatment strategy, and optimization of patient prognosis. Computer-assisted diagnostic systems for evaluating upper endoscopy images have recently emerged as a supporting tool in endoscopy due to the risks of misdiagnosis related to standard endoscopy, differing expertise levels among endoscopists, time-consuming procedures, lack of availability of advanced procedures, increasing workloads, and the development of endoscopic mass screening programs. Recent research has tended toward computerized, automatic, and real-time detection of lesions, approaches that offer utility in daily practice. Despite promising results, certain studies may overestimate the diagnostic accuracy of artificial systems, and several limitations remain to be overcome. Therefore, additional multicenter randomized trials and the expansion of existing database platforms are needed before clinical implementation can be certified. This paper presents an overview of the literature and the current state of knowledge on the usefulness of different types of machine learning systems in the assessment of premalignant and malignant esophageal lesions via conventional and advanced endoscopic procedures. It introduces artificial intelligence terminology and reviews the most prominent recent research on computer-assisted diagnosis of neoplasia in Barrett's esophagus and early esophageal squamous cell carcinoma, as well as prediction of invasion depth in esophageal neoplasms.
Furthermore, this review highlights the main directions of future doctor-computer collaboration, in which machines are expected to improve the quality of medical practice and routine clinical workflow, thus reducing the burden on physicians.
Keywords: Barrett’s esophagus; artificial intelligence; computer-assisted diagnosis; endoscopy; esophageal cancer
Year: 2020 PMID: 32708343 PMCID: PMC7404688 DOI: 10.3390/medicina56070364
Source DB: PubMed Journal: Medicina (Kaunas) ISSN: 1010-660X Impact factor: 2.430
Figure 1. Types of machine learning algorithms: supervised learning—task-driven (classification); unsupervised learning—data-driven (clustering); and reinforcement learning—the algorithm learns from trial and error.
Figure 2. Convolutional neural network (CNN) system: an input layer with the raw data of the endoscopic image, multiple layers that extract specific features, and an output layer that produces the image classification.
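The CNN pipeline described in Figure 2 can be illustrated with a minimal sketch (not taken from any of the reviewed systems): a random array stands in for an endoscopic image, a single convolutional filter plays the role of the feature-extraction layers, and a linear output layer with softmax yields class probabilities (e.g., neoplastic vs. normal). All names and sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid 2-D convolution: the feature-extraction step of a CNN layer."""
    h, w = kernel.shape
    out = np.empty((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def relu(x):
    # Nonlinear activation applied to the feature map
    return np.maximum(x, 0)

def softmax(x):
    # Converts output-layer scores into class probabilities
    e = np.exp(x - x.max())
    return e / e.sum()

image = rng.random((8, 8))              # stand-in for an endoscopic image
kernel = rng.standard_normal((3, 3))    # a (normally learned) texture/edge filter
features = relu(conv2d(image, kernel))  # hidden feature map (6x6)
weights = rng.standard_normal((2, features.size))  # output (classification) layer
probs = softmax(weights @ features.ravel())        # two class probabilities

print(probs)  # sums to 1 across the two classes
```

In a real system such as those reviewed below, many stacked convolutional layers and millions of learned parameters replace this single filter, and training adjusts the kernels and weights by backpropagation.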
Table 1. Current studies applying AI in the detection of esophageal cancer.
| Ref. | Year | Aim of Study | Study Design | Type of AI (AI Classifier) | AI Validation Methods | Training: No. Cases (Negative/Positive) | Training: No. Images (Negative/Positive) | Training: Endoscopic Procedure | Test: No. Cases (Negative/Positive) | Test: No. Images (Negative/Positive) | Test: Endoscopic Procedure | Accuracy (%) | Sensitivity/Specificity (%) | AUC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Van der Sommen et al. [ | 2016 | Detection of early neoplasia in BE | R | color filters, specific texture, and ML (“Filter with Gabor bank”, SVM) | leave- | 44 pts with BE (23/21) | WLE | 83 (per image); 86/87 (per patient) | - | |||||
| Mendel et al. [ | 2017 | Detection of early neoplasia in BE | R | CNN | 50/50 EGD images (Endoscopic Vision Challenge MICCAI 2015) | HD-WLE | 94/88 | - | ||||||
| Ebigbo et al. [ | 2019 | Detection of early Barrett AC | R | deep CNN (ResNet) | leave-one-patient-out CV | Local dataset: 41/33 pts, 148 HD WLE/NBI | HD- WLE/NBI | Local dataset: 97/88 (WLE) | - | |||||
| Ghatwary et al. [ | 2019 | Detection of early Barrett AC | R | R-CNN, Fast R-CNN, Faster R-CNN, SSD | 2- and 5-fold CV, leave-one-patient-out | MICCAI dataset: 21 pts (9/12) (training dataset) | 60 (30/30) EGD images | HD-WLE | MICCAI dataset: 9 pts (4/5) (validation dataset) | 40 (20/20) EGD images | HD-WLE | 83 (ARR for Faster R-CNN) | 96/92 (SSD) | - |
| Hashimoto et al. [ | 2020 | Detection of early | R | CNN based on Xception architecture, YOLO v2 | Internal validation | 100 pts (30/70) | 1832 (916/916) EGD images | WLE/NBI | 39 pts (13/26) | 458 (233/225) EGD images | WLE/NBI | 95.4 | 96.4/ | - |
| Vennalaganti et al. [ | 2017 | Detection of early | P | neural network-based, high-speed computer scan | 160 pts (134 ND/LGD, 26 HGD/EAC) randomized: | WATS | The addition of WATS: absolute detection rate increase 14.4% |
| Swager et al. [ | 2017 | Detection of early BE neoplasia | R | ML methods: SVM, discriminant analysis, AdaBoost, random forest, k-nearest neighbors, etc. | Leave-one-out CV | 19 BE pts | Ex vivo VLE images | 90/93 | 0.95 |
| Struybenberg et al. [ | 2019 | Detection of Barrett’s neoplasia | P | 8 predictive models (e.g., SVM, random forest, Naive Bayes); best = CAD multi-frame image | leave-one-out CV | 52 endoscopic resection specimens from 29 BE pts | Ex vivo VLE images | - | - | 0.94 |
| Sehgal et al. [ | 2018 | Detection of dysplasia arising in BE | P | ML algorithm: DT (WEKA package) | 40 pts BE ± dysplasia | Video HD-EGD, i-Scan | 92 | 97/88 | - |
| Ebigbo et al. [ | 2020 | Real-time detection of early neoplasia in BE | R/P | DeepLab V.3+, an encoder–decoder | classification (global prediction), segmentation (dense prediction) | 129 EGD images | HD-WLE/ | 14 pts BE (validation dataset) | 26/36 images | random images | 89.9 | 83.7/ | - |
| De Groof et al. [ | 2019 | Recognition of Barrett’s neoplasia | P | supervised ML models (trained on color/texture features), SVM | leave-one-out CV | 60 pts (20/40) | HD-WLE | 92 | 95/85 | 0.92 |
| Guo et al. [ | 2020 | Real-time automated | R/P | DL model: SegNet = deep encoder–decoder architecture for | AI probability heat map generated for each input (ESD image) | 358/191 pts | 6473 (3703/2770) images | NBI images | Validation: 59 consecutive cc cases (dataset A); 2004 consecutive non-cc cases (dataset B); 27 non-ME cc cases + 20 ME cc cases (dataset C); 33 normal cases (dataset D) | Validation: 1480 cc images (dataset A); 5191 non-cc images (dataset B); 27 non-ME cc images + 20 ME cc images (dataset C); 33 normal images (dataset D) | NBI images (datasets A, B); | - | 98.04/ | 0.989 (datasets A, B) |
| Ohmori et al. [ | 2020 | Detect and differentiate esophageal SCC | R | deep neural network (SSD) | Caffe deep learning framework | 804 SCC pts | 9591 non-ME/7844 ME, SCC images; | ME/non-ME ESD images | 135 pts | 255 non-ME WLE; 268 non-ME, NBI/BLI; 204 ME-NBI/ | non-ME WLE; non-ME/ME NBI, BLI | 83 | 98/68 | - |
| Tokai et al. [ | 2020 | Diagnostic ability of AI to measure ESCC invasion depth | R | deep neural network-SSD, GoogLeNet | Caffe deep learning framework | pre-training 8428 images; training 1751 EGD images | WLE/NBI images | 55 consecutive patients, 42 with EP-SM1 ESCC and 13 with SM2 ESCC | 291 images | WLE/NBI images | 95.5 (SCC diagnosis); | 84.1 (invasion depth) | - |
| Zhao et al. [ | 2019 | Automated classification | P | double-labelling FCN, self-transfer learning | VGG16 net architecture, 3-fold CV | 219 pts (30 inflammation, 24 LGD, 165 ESCC) | ME-NBI images | 89.2 (lesion level) | 87.0/ | - |
| Everson et al. [ | 2019 | Real-time classification of IPCL patterns in the diagnosis of ESCC | P | CNN, eCAMs (discriminative areas normal/abnormal) | five-fold CV | 17 pts (7 normal, 10 ESCC) | ME-NBI images (Video EGD) | 93.7 normal/abnormal IPCL | 89.3/ | - |
| García-Peraza-Herrera et al. [ | 2020 | Classify still | P | CNN architecture for the binary classification task (explainability) ResNet18CAM-DS | 114 pts (45/69) | ME-NBI video | 91.7 | 93.7/ | - |
| Kodashima et al. [ | 2007 | Discrimination normal/malignant | P, ex | ImageJ program | 10 pts | Endocytoscopy | Difference in the mean ratio of total nuclei: |
| Shin et al. [ | 2015 | Diagnosis of | P | Linear | 177 pts | Laptop-interfaced HRME | 87/ | - |
| Quang et al. [ | 2016 | Diagnosis of esophageal SCC | R | Linear | Data identical to those for [124] | Tablet-interfaced HRME | 95/ | - |
| Kumagai et al. [ | 2019 | Diagnosing ESCC based on ECS images (optical biopsy) | R/P | CNN based on GoogLeNet, 22 layers, backpropagation | Caffe deep learning framework | 240 pts (114/126) → 308 ECS | 4715 (3574/1141) images | ECS images | 55 consecutive pts (28/27) | 1520 images | ECS images | 90.9 | 92.6/ | 0.85; 0.90 (HMP) |
| Horie et al. [ | 2019 | Detection of esophageal cancer (SCC and AC) | R | deep CNN-SSD | Caffe deep learning framework | 384 pts esophageal cc (397 lesions ESCC, 32 lesions EAC) | 8428 images esophageal cc | WLE/NBI images | 50/47 pts (49 lesions: 41 ESCC, 8 EAC) | 1118 images | WLE/NBI images | 98 (superficial/advanced cc); 99 for ESCC, 90 for EAC | 98 | - |
| Luo et al. | 2019 | AI for the diagnosis of upper | R/P | GRAIDS: DL semantic segmentation model (encoder-decoder DeepLab’s V3 + algorithm) | internal validation, external validation (5 hospitals), prospective validation | 1,036,496 endoscopy images from 84,424 individuals used to develop and test GRAIDS | HD-WLE EGD | 95.5 (internal validation set); 92.7 (prospective set); 91.5–97.7 (5 external validation sets) | 94.2/92.3 (prospective set) | 0.966–0.990 (five external validation datasets) |
EGD—esophagogastroduodenoscopy; AI—artificial intelligence; R—retrospective; P—prospective; WLE—white-light endoscopy; NBI—narrow-band imaging; HD—high definition; ME—magnifying endoscopy; VLE—volumetric laser endomicroscopy; WATS—wide-area transepithelial sampling; BLI—blue laser imaging; ECS—endocytoscopic system; CV—cross-validation; SVM—support vector machine; ANN—artificial neural network; CNN—convolutional neural network; R-CNN—region-based convolutional neural network; SSD—Single-Shot MultiBox Detector; FCN—fully convolutional network; DT—decision tree; ARR—average recall rate; cc—cancerous; ND—nondysplastic; LGD—low-grade dysplasia; HGD—high-grade dysplasia; EAC—early adenocarcinoma; ESCC—esophageal squamous cell carcinoma; IPCL—intrapapillary capillary loop; eCAMs—explicit class activation maps; HRME—high-resolution microscopic endoscopy; HMP—higher-magnification picture; LMP—lower-magnification picture.
Table 2. Clinical trials using AI for diagnosing early neoplasia in Barrett’s esophagus and esophageal carcinoma.
| Status | Study Title | Number ID/Acronym | Study Type | Conditions | Design/Interventions | Outcomes | Target Sample Size (No. Participants) | Region |
|---|---|---|---|---|---|---|---|---|
| Recruiting | The analysis of WATS3D increased yield of Barrett’s esophagus and esophageal dysplasia | NCT03008980 | Observational | GERD; Barrett esophagus; esophageal dysplasia; esophagus adenocarcinoma | Diagnostic test: patients will undergo routine-care EGD with WATS3D brush samples and forceps biopsies; collection of cytology/pathology results | - | 75,000 | US |
| Recruiting | Volumetric laser endomicroscopy with intelligent real-time image segmentation (IRIS) | NCT03814824 | Interventional | Barrett’s esophagus with/without dysplasia; Barrett’s esophagus with low/high-grade dysplasia | Diagnostic test: IRIS | - | 200 | US |
| Completed | A comparison of volumetric laser endomicroscopy and endoscopic mucosal resection in patients with Barrett’s dysplasia or intramucosal adenocarcinoma | NCT01862666 | Observational | Barrett’s-associated dysplasia; intramucosal adenocarcinoma; CAD image analysis | - | To evaluate the ability of physicians to use VLE to visualize HGIN/IMC in both the ex vivo and in vivo settings and to correlate those images with standard histology of EMR specimens as the gold standard | 30 | The Netherlands |
| Preinitiation | The additional effect of an AI support system to detect esophageal cancer: exploratory randomized controlled trial | UMIN 000039924/AIDEC | Interventional | Esophageal neoplasm; AI | - | To investigate the efficacy of AI for the diagnosis of esophageal cancer | 300 | Japan |
| Recruiting | Automatic diagnosis of early esophageal squamous neoplasia using pCLE with AI | NCT04136236 | Observational | Esophageal neoplasm; AI; confocal laser endomicroscopy | Diagnostic test: diagnosis by AI and by endoscopists | - | 60 | China |
| Recruiting | Research on development of AI for detection and classification of upper gastrointestinal cancers in endoscopic images | UMIN000039597 | Observational | Esophageal neoplasm; AI | Collection of endoscopic images of upper GI cancer; development of an AI system for detection of upper GI cancer | Assessment of AI system performance by expert endoscopists | 200 | Japan |
| Completed | AI for early diagnosis of esophageal squamous cell carcinoma during optical enhancement magnifying endoscopy | NCT03759756 | Observational | AI; optical enhancement endoscopy; magnifying endoscopy | Arm group labels: AI visible/invisible groups | - | 119 | China |
GI—gastrointestinal; AI—artificial intelligence; GERD—gastroesophageal reflux disease; EGD—esophagogastroduodenoscopy; pCLE—probe-based confocal laser endomicroscopy; VLE—volumetric laser endomicroscopy; WATS3D—wide-area transepithelial sampling associated with computer-assisted three-dimensional analysis; IRIS—intelligent real-time image segmentation; EMR—endoscopic mucosal resection; HGIN—high-grade intraepithelial neoplasia; IMC—intramucosal adenocarcinoma; CAD—computer-assisted diagnosis.