Shraddha Gulati, Andrew Emmanuel, Mehul Patel, Sophie Williams, Amyn Haji, Bu'Hussain Hayee, Helmut Neumann.
Abstract
Artificial intelligence is a strong focus of interest for global health development. Diagnostic endoscopy is an attractive substrate for artificial intelligence, with real potential to improve patient care through standardisation of endoscopic diagnosis and as an adjunct to enhanced imaging diagnosis. The possibility of amassing large datasets to refine algorithms makes the adoption of artificial intelligence into global practice a potential reality. Initial studies in luminal endoscopy involved machine learning and were retrospective. Appreciable improvements in diagnostic performance have followed the adoption of deep learning. Research foci in the upper gastrointestinal tract include the diagnosis of neoplasia, including Barrett's, squamous cell and gastric neoplasia, where prospective and real-time artificial intelligence studies have been completed and demonstrate a benefit of artificial intelligence-augmented endoscopy. Deep learning applied to small bowel capsule endoscopy also appears to enhance pathology detection and reduce capsule reading time. Prospective evaluation, including the first randomised trials, has been performed in the colon, demonstrating improved polyp and adenoma detection rates; however, this benefit appears to be confined to small polyps. There are potential additional roles for artificial intelligence in improving the quality of endoscopic examinations, training and triaging of referrals. Further large-scale, multicentre and cross-platform validation studies are required for the robust incorporation of artificial intelligence-augmented diagnostic luminal endoscopy into routine clinical practice.
Keywords: AI; endoscopy; imaging
Year: 2020 PMID: 32637935 PMCID: PMC7315657 DOI: 10.1177/2631774520935220
Source DB: PubMed Journal: Ther Adv Gastrointest Endosc ISSN: 2631-7745
Figure 1. Schematic diagram of a CNN. CNNs differ from traditional fully connected networks in that each neuron connects to only a local subset of neurons in the previous layer rather than to all of them. At each hidden layer of the CNN, the input values are multiplied by weights and a bias is added. This value is then passed through an activation function (ReLU): if the value exceeds the threshold, the neuron fires. The output of this layer then becomes the input of the next hidden layer, following the same formula. The final layer is fully connected and is where image classification ensues, with final activation driven by values from the pooling layer exceeding the threshold: a high value correctly identifies the image, a low value does not.
CNN, convolutional neural network; ReLU, rectified linear activation unit.
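To make the forward pass described in the figure concrete, the following is a minimal sketch in PyTorch of a CNN of the kind depicted: a locally connected convolutional layer computing weights times input plus bias, a ReLU activation, a pooling stage and a final fully connected classification layer. The layer sizes, the single-channel 28 × 28 input and the two output classes are illustrative assumptions, not the architecture of any model reviewed here.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Minimal CNN mirroring Figure 1: conv -> ReLU -> pool -> fully connected."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # Convolutional layer: each output neuron sees only a local 3x3 patch,
        # computing (input * weights) + bias over that patch.
        self.conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
        # ReLU activation: values above zero pass through ("fire"); others are suppressed.
        self.relu = nn.ReLU()
        # Pooling layer: downsamples the feature maps, keeping the strongest activations.
        self.pool = nn.MaxPool2d(kernel_size=2)
        # Final fully connected layer: every pooled value connects to every class score.
        self.fc = nn.Linear(8 * 14 * 14, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(self.relu(self.conv(x)))  # hidden-layer output feeds the next stage
        x = x.flatten(start_dim=1)              # flatten pooled feature maps to a vector
        return self.fc(x)                       # class scores; highest score = predicted label

# Usage: classify one hypothetical 28x28 single-channel image.
model = TinyCNN()
scores = model(torch.randn(1, 1, 28, 28))
print(scores.argmax(dim=1))  # index of the predicted class
```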
Diagnostic test summaries for artificial intelligence–augmented endoscopy of the upper gastrointestinal tract.
| Author | P/R | AI model/imaging modality | Test dataset (no. of images) | Sensitivity (%) | Specificity (%) | Accuracy (%)/AUC |
|---|---|---|---|---|---|---|
| Helicobacter pylori infection | | | | | | |
| Shichijo and colleagues | R | DL WLE | 32,208 | 81.9 | 83.4 | 83.1 |
| | | DL WLE | 11,481 | 85.2 | 89.3 | 88.6 |
| Itoh and colleagues | R | DL WLE | 30 | 86.7 | 86.7 | 0.956 |
| Huang and colleagues | R | DL WLE | 74 patients | 72.7–84.8 | 85.4–95.1 | 85.1 |
| Yasuda and colleagues | R | DL LCI | 525 | 90.4 | 85.7 | 87.6 |
| Nakashima and colleagues | R | DL WLE | 60 | 66.7 | 60.0 | 0.66 |
| | | DL BLI | 60 | 96.7 | 86.7 | 0.96 |
| | | DL LCI | 60 | 96.7 | 83.3 | 0.95 |
| Barrett's neoplasia | | | | | | |
| Ebigbo and colleagues | P | DL WLE | 62 | 83.7 | 100 | 89.9 |
| Qi and colleagues | R | CAD WLE | 314 | 82 | 74 | 83 |
| van der Sommen and colleagues | R | CAD WLE | 100 | 86 | 87 | |
| Shin and colleagues | R | CAD HRME | 153 | 88 | 85 | 85 |
| Swager and colleagues | R | CAD VLE | 40 | 90 | 93 | 0.95 |
| Sehgal and colleagues | R | CAD (DT) WLE/ACA | 40 videos | 97 | 88 | 92 |
| Squamous cell neoplasia | | | | | | |
| Tang and colleagues | R | DL WLE | 946 | 97 | 94 | 97 |
| Shiroma and colleagues | R | DL WLE/NBI | 40 videos | 80 | 63.3 | 67.5 |
| Zhao and colleagues | R | DL mNBI | 1383 | 83 | 95.7 | 89.2 |
| Horie and colleagues | R | DL WLE | 1118 | 98 | 98 | |
| Guo and colleagues | P | DL NBI | 6671 (1480 neoplastic images) | 91.5 | 99.9 | 0.989 |
| Quang and colleagues | R | ML HRME | 167 | 95 | 91 | 0.937 |
| Kumagai and colleagues | R | Endocytoscopy | 1520 (55 patients) | 92.6 | 89.3 | 90.9 |
| Tokai and colleagues | R | DL WLE | 293 | 87.7 | 72.5 | 77.8 |
| | | DL NBI | 40 | 92.3 | 77.8 | |
| Gastric neoplasia | | | | | | |
| Hirasawa and colleagues | R | DL WLE | 2296 (714 cancers) | 92.2 | NR | NR |
| Ishioka and colleagues | R | DL WLE | 68 videos | 94.1 | NR | NR |
| Luo and colleagues | P | DL WLE | 1,036,496 | 90.7–98.2 | 91.3–97.9 | 0.91–0.97 |
| Miyaki and colleagues | R | CAD FICE | 46 cancers | 84.8 | 87.0 | 85.9 |
| Miyaki and colleagues | R | CAD BLI | 100 | NR | NR | SVM value 0.846 ± 0.22 |
| Kanesaka and colleagues | R | CAD mNBI | 81 images (61 cancers) | 96.7 | 95 | 96.3 |
| Li and colleagues | R | DL NBI | 341 (170 cancers) | 91.1 | 90.6 | 90.9 |
| Horiuchi and colleagues | R | DL mNBI | 258 images (151 cancers) | 95.4 | 71.0 | 85.3 |
| Yoon and colleagues | R | DL WLE | 660 images (330 cancers) | 91 | 97.6 | 0.981 |
| Kubota and colleagues | R | DL WLE | 90 | NR | NR | 64.7 |
| Zhu and colleagues | R | DL WLE | 203 | 76.47 | 95.56 | 0.94 |
P, prospective; R, retrospective; ACA, acetic acid; AI, artificial intelligence; AUC, area under the curve; BLI, blue laser imaging; CAD, computer-assisted diagnosis; DL, deep learning; DT, decision tree; FICE, Fujinon intelligent chromoendoscopy; HRME, high-resolution microendoscopy; LCI, linked colour imaging; ML, machine learning; mNBI, magnification narrow band imaging; NBI, narrow band imaging; NR, not reported; SVM, support vector machine; VLE, volumetric laser endomicroscopy; WLE, white light endoscopy.
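The sensitivity, specificity and accuracy figures reported above all derive from the same underlying counts of true and false positives and negatives. As a minimal sketch in Python, with illustrative counts that do not correspond to any study in the table, the metrics are computed as follows:

```python
# Hypothetical confusion-matrix counts for a lesion classifier
# (illustrative only; not taken from any study in the table).
tp, fn = 90, 10   # neoplastic images: correctly flagged / missed
tn, fp = 85, 15   # non-neoplastic images: correctly cleared / falsely flagged

sensitivity = tp / (tp + fn)                 # proportion of true lesions detected
specificity = tn / (tn + fp)                 # proportion of normal images correctly cleared
accuracy = (tp + tn) / (tp + fn + tn + fp)   # proportion of all images classified correctly

print(f"Sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, accuracy {accuracy:.1%}")
# Sensitivity 90.0%, specificity 85.0%, accuracy 87.5%
```

The AUC values in the final column summarise the same sensitivity/specificity trade-off across all possible operating thresholds rather than at the single threshold reported.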
Summary of randomised trials with adenoma detection rate (ADR) as the primary outcome.
| Author | Centres | Randomisation | CADe model | Comparator | No. of patients | ADR AI (%) | ADR CC (%) | OR (95% CI) |
|---|---|---|---|---|---|---|---|---|
| Wang and colleagues | 1 | 1:1 | DCNN | CC (WLE) | 1058 (CADe 522; CC 536) | 29.1 | 20.3 | 1.61 (1.213–2.135) |
| Wang and colleagues | 1 | 1:1 | DCNN | CC (WLE) | 1010 (CADe 508; CC 502) | 34 | 28 | 1.36 (1.03–1.79) |
| Gong and colleagues | 1 | 1:1 | DCNN | CC (WLE) | 704 (CADe 355; CC 349) | 16 | 8 | 2.30 (1.40–3.77) |
| Su and colleagues | 1 | 1:1 | DCNN | CC (WLE) | 659 (CADe 308; CC 315) | 28.9 | 16.5 | 2.055 (1.397–3.024) |
| Liu and colleagues | 1 | 1:1 | DCNN | CC (WLE) | 1026 (CADe 508; CC 518) | 39 | 23 | 1.64 (1.201–2.220) |
AI, artificial intelligence; CADe, computer-assisted detection; CC, conventional colonoscopy; CI, confidence interval; DCNN, deep convolutional neural network; OR, odds ratio; WLE, white light endoscopy.
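As a worked example of how the odds ratios above relate to the reported detection rates, the following Python sketch back-calculates per-arm patient counts from the first Wang and colleagues trial (522 CADe patients at 29.1% ADR; 536 controls at 20.3%) and reproduces the reported OR of 1.61 with a Wald 95% confidence interval. The rounding of rates back to integer counts is our assumption, not a step described in the trial report.

```python
import math

# Back-calculated counts from the first Wang and colleagues trial in the table:
# 522 CADe patients at 29.1% ADR, 536 conventional-colonoscopy patients at 20.3%.
a = round(522 * 0.291)   # CADe patients with >=1 adenoma  -> 152
b = 522 - a              # CADe patients without           -> 370
c = round(536 * 0.203)   # CC patients with >=1 adenoma    -> 109
d = 536 - c              # CC patients without             -> 427

odds_ratio = (a / b) / (c / d)                       # odds of adenoma detection, CADe vs CC
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)         # standard error of log(OR), Wald method
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR {odds_ratio:.2f} (95% CI {lo:.3f}-{hi:.3f})")
# OR 1.61 (95% CI 1.213-2.135), matching the table
```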