Mohamed Hussein1,2,3, Juana González-Bueno Puyal2,4, David Lines4, Vinay Sehgal3, Daniel Toth4, Omer F Ahmad2,3, Rawen Kader2, Martin Everson1, Gideon Lipman1, Jacobo Ortiz Fernandez-Sordo5, Krish Ragunath5, Jose Miguel Esteban6, Raf Bisschops7, Matthew Banks3, Michael Haefner8, Peter Mountney4, Danail Stoyanov2, Laurence B Lovat1,2,3, Rehan Haidry1,2,3.
Abstract
BACKGROUND AND AIMS: Seattle protocol biopsies for Barrett's Esophagus (BE) surveillance are labour intensive with low compliance. Dysplasia detection rates vary, leading to missed lesions. This can potentially be offset with computer aided detection. We have developed convolutional neural networks (CNNs) to identify areas of dysplasia and where to target biopsy.Entities:
Keywords: AI; Barrett's Esophagus; CAD; CNN; artificial intelligence; computer aided detection; convolutional neural networks; early detection; early neoplasia; neoplasia
Year: 2022 PMID: 35521666 PMCID: PMC9278593 DOI: 10.1002/ueg2.12233
Source DB: PubMed Journal: United European Gastroenterol J ISSN: 2050-6406 Impact factor: 6.866
FIGURE 1 Breakdown of the data set in the classification/segmentation models and the potential importance of each model output in the computer aided detection (CAD) system. *In one patient, the video segment of esophagus was split into two segments: dysplastic and NDBE. The former was used for training and the latter for testing
FIGURE 2 AUC performance of the classifier algorithm on iscan‐1 (a) and unenhanced white light (WL) (b)
Performance metrics of the classifier model on iscan‐1 and unenhanced white light (WL) imaging in the test data set
| Tested on | AUC | Sens | Spec | Accuracy | No. of dysplastic patients | No. of NDBE patients | No. of dysplastic images | No. of NDBE images |
|---|---|---|---|---|---|---|---|---|
| iscan‐1 | 93% | 91% | 79% | 86% | 28 | 16 | 168 | 96 |
| Unenhanced WL | 83% | 92% | 73% | 83% | 18 | 14 | 108 | 84 |
Abbreviations: AUC, area under the receiver operator curve; WL, white light; Sens, sensitivity; Spec, specificity.
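As a sanity check on the table above, per-image accuracy is consistent with the class-size-weighted mean of sensitivity and specificity. A minimal sketch (the `weighted_accuracy` helper is ours, not part of the authors' pipeline; small discrepancies come from the rounding of the reported percentages):

```python
def weighted_accuracy(sens, spec, n_pos, n_neg):
    """Accuracy as the class-size-weighted mean of sensitivity and specificity."""
    return (sens * n_pos + spec * n_neg) / (n_pos + n_neg)

# iscan-1 test set: 168 dysplastic, 96 NDBE images
acc_iscan = weighted_accuracy(0.91, 0.79, 168, 96)  # ~0.87, vs. reported 86%

# Unenhanced WL test set: 108 dysplastic, 84 NDBE images
acc_wl = weighted_accuracy(0.92, 0.73, 108, 84)     # ~0.84, vs. reported 83%
```

Both values match the reported accuracies to within rounding of the input percentages.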
FIGURE 3 Heat map outputs from the classifier model trained on video frame segments without delineations using an indirectly supervised approach. (a) Original image, (b) expert delineation, (c) heat map generated by the classifier. On the heat maps, pixels are coloured according to their dysplastic content as estimated by the model: red areas (values closer to 1) show the most likely dysplastic pixels and therefore the optimal area for a targeted biopsy
Targeted biopsy predictions generated by the CNN
| Scenario | Mean number of biopsy predictions by the CNN | Maximum number of biopsy predictions by the CNN | Proportion of biopsies within expert 1 delineation | Proportion of biopsies within expert 2 delineation | Proportion of biopsies within intersection of delineations | Proportion of biopsies within union of delineations |
|---|---|---|---|---|---|---|
| Scenario 1 (1 max pix value) | 1 | 1 | 81% | 74% | 71% | 85% |
| Scenario 2 (1 max pix value + 1 geometric centre) | 2 | 2 | 88% | 81% | 78% | 91% |
| Scenario 3 (1 max pix value + up to 2 geometric centres) | 3 | 3 | 91% | 85% | 81% | 93% |
| Scenario 4 (1 max pix value + all geometric centres) | 3 | 5 | 94% | 90% | 86% | 97% |
Note: Different scenarios were generated to assess where the targeted biopsies fall within the gold-standard expert delineations, which matched areas of histologically confirmed dysplasia in videos. Based on expert consensus, scenario 2 was considered the most clinically relevant while remaining user-friendly.
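The scenarios above combine the heat map's maximum-probability pixel with the geometric centres of suspicious regions. A minimal sketch of that idea, assuming the classifier emits a per-pixel dysplasia probability map; the 0.5 threshold and the connected-component post-processing are our assumptions, not the authors' exact method:

```python
import numpy as np
from scipy import ndimage

def biopsy_points(heatmap, threshold=0.5, max_centres=1):
    """Return the max-probability pixel plus up to `max_centres` geometric
    centres of above-threshold regions (scenario 2 uses one centre)."""
    # Point 1: the single most dysplastic pixel according to the model.
    points = [np.unravel_index(np.argmax(heatmap), heatmap.shape)]
    # Additional points: centres of mass of connected suspicious regions.
    labels, n = ndimage.label(heatmap > threshold)
    if n:
        idx = range(1, n + 1)
        centres = ndimage.center_of_mass(heatmap, labels, idx)
        # Rank regions by area so the largest suspicious areas come first.
        sizes = ndimage.sum(heatmap > threshold, labels, idx)
        order = np.argsort(sizes)[::-1][:max_centres]
        points += [tuple(int(round(c)) for c in centres[i]) for i in order]
    return points

# Toy example: one suspicious region with a hottest pixel inside it.
heat = np.zeros((64, 64))
heat[20:30, 20:30] = 0.8
heat[24, 25] = 0.95
print(biopsy_points(heat))  # scenario 2: max pixel + 1 geometric centre
```

With `max_centres=2` or unbounded, the same function approximates scenarios 3 and 4.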
FIGURE 4 Images with BE dysplasia (a) and targeted biopsy and delineation predictions relative to the expert ground truth (b) by the artificial intelligence (AI) system. Delineations (green and purple outlines) = 2 different expert delineations. Blue shaded delineation = delineation prediction by the convolutional neural network (CNN). Orange and red dots = points of interest/targeted biopsy predictions by the AI system based on scenario 2 (Table 2)
Comparison of the performance of the artificial intelligence (AI) system versus six non-expert endoscopists
| | Per-image sensitivity | Per-image specificity | Mean |
|---|---|---|---|
| Endoscopist 1 | 23/28 (82%) | 19/33 (58%) | |
| Endoscopist 2 | 21/28 (75%) | 12/33 (36%) | Sensitivity = 79% |
| Endoscopist 3 | 16/28 (57%) | 22/33 (67%) | |
| Endoscopist 4 | 24/28 (86%) | 22/33 (67%) | Specificity = 49% |
| Endoscopist 5 | 22/28 (79%) | 15/33 (46%) | |
| Endoscopist 6 | 26/28 (93%) | 7/33 (21%) | |
| AI system | 27/28 (96%) | 29/33 (88%) | |
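The pooled "Mean" column follows directly from the six endoscopists' per-image counts in the table (all share the same denominators, so the simple average equals the pooled rate). A quick check:

```python
# Per-image counts copied from the table above (28 dysplastic, 33 NDBE images).
sens = [23/28, 21/28, 16/28, 24/28, 22/28, 26/28]
spec = [19/33, 12/33, 22/33, 22/33, 15/33, 7/33]

mean_sens = sum(sens) / len(sens)  # ~0.79, matching the reported 79%
mean_spec = sum(spec) / len(spec)  # ~0.49, matching the reported 49%
```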