Ji Hyun Kim, Seung-Joo Nam, Sung Chul Park.
Abstract
Recently, studies in many medical fields have reported that image analysis based on artificial intelligence (AI) can be used to analyze structures or features that are difficult to identify with the human eye. Efforts to improve the diagnosis of early gastric cancer, such as narrow-band imaging, are ongoing, but diagnosis often remains difficult. Therefore, AI-based diagnostic methods for endoscopic imaging have been developed, and their effectiveness has been confirmed in many studies. AI-based gastric cancer diagnostic programs have shown relatively high diagnostic accuracy and can differentially diagnose non-neoplastic lesions, including benign gastric ulcers and dysplasia. AI systems have also been developed that help predict the invasion depth of gastric cancer from endoscopic images and support observation of the stomach during endoscopy without blind spots. Therefore, the use of AI in endoscopy is expected to aid the diagnosis of gastric neoplasms and to guide the application of endoscopic therapy by predicting invasion depth. ©The Author(s) 2020. Published by Baishideng Publishing Group Inc. All rights reserved.
Keywords: Artificial intelligence; Convolutional neural network; Diagnosis; Esophagogastroduodenoscopy; Gastric neoplasm; Invasion depth
Year: 2021 PMID: 34239268 PMCID: PMC8240061 DOI: 10.3748/wjg.v27.i24.3543
Source DB: PubMed Journal: World J Gastroenterol ISSN: 1007-9327 Impact factor: 5.742
Figure 1. Overview of artificial intelligence, machine learning, and deep learning. Artificial intelligence refers to machines that can perform complex tasks like humans by imitating human intelligence. One of the most important ways to achieve artificial intelligence is machine learning, in which a machine learns by itself from the data provided in order to make accurate decisions. Deep learning is an important technique among the many methods of machine learning; it is a kind of artificial neural network that learns from data through layers of information input and output similar to neurons in the brain.
Figure 2. Illustrative model of an artificial neural network. Once an endoscopic image is supplied to the input layer, each hidden layer passes information on to the next layer. Through this network, the input image is classified at the output layer.
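To make the architecture in Figure 2 concrete, the following is a minimal sketch of such a feed-forward network, assuming PyTorch; the layer sizes, the 64 x 64 grayscale input, and the two output classes are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch of the feed-forward network in Figure 2 (assumed PyTorch;
# layer sizes and class count are illustrative, not from the article).
import torch
import torch.nn as nn

class SimpleANN(nn.Module):
    def __init__(self, n_inputs=64 * 64, n_hidden=128, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                    # input layer: pixels of the endoscopic image
            nn.Linear(n_inputs, n_hidden),   # hidden layer 1
            nn.ReLU(),
            nn.Linear(n_hidden, n_hidden),   # hidden layer 2
            nn.ReLU(),
            nn.Linear(n_hidden, n_classes),  # output layer: e.g. neoplastic vs non-neoplastic
        )

    def forward(self, x):
        return self.net(x)

# Usage: classify one dummy 64x64 grayscale image
model = SimpleANN()
image = torch.rand(1, 1, 64, 64)
logits = model(image)
predicted_class = logits.argmax(dim=1)
```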
Figure 3. Overview of a convolutional neural network. It is composed of stacks of convolutional layers, pooling layers, and fully connected layers. The convolutional and pooling layers extract features from the input images, while the fully connected layers produce the classification output.
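As a companion to Figure 3, here is a small convolutional network with the described convolutional, pooling, and fully connected stacks, again assuming PyTorch; the filter counts, kernel sizes, input size, and two-class output are illustrative assumptions rather than details from the article.

```python
# Minimal sketch of the convolutional neural network in Figure 3 (assumed
# PyTorch; architecture details are illustrative, not from the article).
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(       # convolutional + pooling stacks extract image features
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                 # pooling layer downsamples the feature maps
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(     # fully connected layers produce the classification
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),     # 64x64 input pooled twice -> 16x16 feature maps
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Usage: classify one dummy 64x64 RGB endoscopic image
model = SimpleCNN()
logits = model(torch.rand(1, 3, 64, 64))
```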
Recently published articles on the application of artificial intelligence in gastric neoplasms (a code sketch of the transfer-learning approach used by several of these studies follows the table)
| Study | Aim | AI model | Imaging modality | Dataset | Performance |
| --- | --- | --- | --- | --- | --- |
| **Detection of gastric cancer** | | | | | |
| Hirasawa | Detect EGC | CNN (SSD) | Conventional endoscopy | Training: 13584 images; Test: 2296 images from 69 patients. | Sensitivity 92.2%, PPV 30.6% |
| Ishioka | Real time detection of EGC | CNN (SSD) | Conventional endoscopy | Live video of 62 patients | Accuracy 94.1%, median time 1 s (range: 0-44 s) |
| Sakai | Detect EGC | CNN | Conventional endoscopy | Training: 348943 images; Test: 9650 images | Accuracy 82.8% |
| Kanesaka | Detect EGC | SVM | M-NBI | Training: 126 images; Test: 81 images | Accuracy 96.3%, sensitivity 96.7%, specificity 95% |
| Li | Detect EGC | CNN (Inception-v3) | M-NBI | Training: 2088 images; Test: 341 images | Accuracy 91.2%, sensitivity 90.6%, specificity 90.9% |
| Horiuchi | Classifying EGC from gastritis | CNN (GoogLeNet) | M-NBI | Training: 2570 images; Test: 258 images. | Accuracy 85.3%, sensitivity 95.4%, specificity 71.0%, test speed 51.83 images/s (0.02 s/image) |
| Horiuchi | Detect EGC | CNN (GoogLeNet) | M-NBI | 174 videos | Accuracy 85.1%, AUC 0.8684, sensitivity 87.4%, specificity 82.8%, PPV 83.5%, NPV 86.7% |
| Luo | Real time detection of EGC | GRAIDS | Conventional endoscopy | 1036496 images from 84424 patients | Sensitivity 94.2%, similar to expert endoscopists (94.5%) and superior to competent (85.8%) and trainee (72.2%) endoscopists |
| Ikenoyama | Detect EGC | CNN (SSD) | WLI, NBI chromoendoscopy | Training: 13584 images; Test: 2940 images. | Sensitivity 58.4%, specificity 87.3%, PPV 26.0%, NPV 96.5% |
| **Differential diagnosis of gastric lesions** | | | | | |
| Sun | Classify ulcers | DCNN | Conventional endoscopy | 854 images | Accuracy 86.6%, sensitivity 90.8%, specificity 83.5% |
| Lee | Detect EGC and benign ulcer | CNN (ResNet50, Inception-v3, VGG16) | Conventional endoscopy | Training: 717 images; Test: 70 images | AUC 0.95, 0.97, and 0.85 for Inception-v3, ResNet50, and VGG16, respectively |
| Cho | Detect AGC, EGC, dysplasia | CNN (Inception-v4, ResNet152, Inception-Resnet-v2) | Conventional endoscopy | 5217 images from 1469 patients | Gastric cancer: accuracy 81.9%, AUC 0.877; Gastric neoplasm: accuracy 85.5%, AUC 0.927 |
| Kim | Classify gastric mesenchymal tumors | CNN | Endoscopic ultrasonography | Training: 905 images; Test: 212 images | Accuracy 79.2%, sensitivity 83.0%, specificity 75.5% |
| **Prediction of invasion depth** | | | | | |
| Kubota | Predict invasion depth | Back propagation | Conventional endoscopy | Training: 800 images; Test: 90 images | Accuracy 77.9%, 29.1%, 51.0% and 55.3% in T1, T2, T3, and T4 stage; Accuracy 68.9% and 63.6% in T1a and T1b stage |
| Zhu | Predict invasion depth | CNN (ResNet50) | Conventional endoscopy | Training: 790 images; Test: 203 images | AUC 0.94, overall accuracy 89.2%, sensitivity 76.5%, specificity 95.6% |
| Yoon | Detect cancer, and predict invasion depth | CNN (VGG16, Grad-CAM) | Conventional endoscopy | 11539 images | Detection AUC 0.981, depth prediction AUC 0.851 (undifferentiated type histology with a lower accuracy) |
| Cho | Predict invasion depth | CNN (Inception-ResNet-v2, DenseNet-161) | Conventional endoscopy | Training: 2899 images, test: 206 images | Internal validation: accuracy 84.1%, AUC 0.887; External validation: accuracy 77.3%, AUC 0.887 |
| Nagao | Predict invasion depth | CNN (ResNet50) | WLI, NBI, indigo-carmine | 16557 images from 1084 cases of gastric cancer | WLI: AUC 0.9590, sensitivity 89.2%, specificity 98.7%, accuracy 94.4%, PPV 98.3%, NPV 91.7%; NBI: AUC 0.9048; Indigo-carmine: AUC 0.9191 |
| **Detection of blind spots** | | | | | |
| Wu | Detect blind spot | DCNN | Conventional endoscopy | 34513 images | Accuracy of detecting blind spot: 90.0%; Blind spot rate: 5.9% |
| Wu | Detect EGC and blind spot | DCNN | Conventional endoscopy | 24549 images | Accuracy 92.5%, sensitivity 94.0%, specificity 91.0%, PPV 91.3%, NPV 93.8% |
| Chen | Detect blind spot | DCNN | Conventional endoscopy, U-TOE | Live video of 437 patients | Blind spot rate with AI: sedated C-EGD, 3.4%; unsedated U-TOE, 21.8%; unsedated C-EGD, 31.2% |
AI: Artificial intelligence; EGC: Early gastric cancer; CNN: Convolutional neural network; SSD: Single Shot MultiBox Detector; PPV: Positive predictive value; SVM: Support vector machine; M-NBI: Magnified narrow-band imaging; AUC: Area under the curve; GRAIDS: Gastrointestinal Artificial Intelligence Diagnostic System; WLI: White light imaging; NBI: Narrow-band imaging; NPV: Negative predictive value; DCNN: Deep convolutional neural network; VGG: Visual Geometry Group; AGC: Advanced gastric cancer; Grad-CAM: Gradient-weighted class activation mapping; U-TOE: Ultrathin transoral endoscopy; C-EGD: Conventional esophagogastroduodenoscopy.
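Several of the studies in the table above fine-tune CNN architectures pretrained on natural images (e.g., ResNet50, Inception-v3, VGG16). The following is a minimal transfer-learning sketch of that general approach, assuming PyTorch and torchvision; the three-class label set, optimizer settings, and dummy batch are illustrative assumptions and do not reproduce any study's actual training pipeline.

```python
# Hedged sketch of fine-tuning an ImageNet-pretrained ResNet50 for gastric
# lesion classification (assumed PyTorch/torchvision; all training details
# are illustrative, not reproduced from the cited studies).
import torch
import torch.nn as nn
from torchvision import models

n_classes = 3                                          # illustrative label set, e.g. AGC / EGC / dysplasia
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, n_classes)  # replace the ImageNet classification head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of endoscopic images
images = torch.rand(4, 3, 224, 224)                    # ResNet50 expects 224x224 RGB input
labels = torch.randint(0, n_classes, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

Replacing only the final fully connected layer while reusing the pretrained feature extractor is a common choice when training data are limited, as is typical for endoscopic image sets of a few hundred to a few thousand images.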