Michael F Byrne1, Nicolas Chapados2,3, Florian Soudan2, Clemens Oertel2, Milagros Linares Pérez4, Raymond Kelly5, Nadeem Iqbal6, Florent Chandelier2, Douglas K Rex7.
Abstract
BACKGROUND: In general, academic but not community endoscopists have demonstrated adequate endoscopic differentiation accuracy to make the 'resect and discard' paradigm for diminutive colorectal polyps workable. Computer analysis of video could potentially eliminate the obstacle of interobserver variability in endoscopic polyp interpretation and enable widespread acceptance of 'resect and discard'. STUDY DESIGN AND METHODS: We developed an artificial intelligence (AI) model for real-time assessment of endoscopic video images of colorectal polyps. A deep convolutional neural network model was used. Only narrow band imaging video frames were used, split equally between relevant multiclasses. Unaltered videos from routine exams not specifically designed or adapted for AI classification were used to train and validate the model. The model was tested on a separate series of 125 videos of consecutively encountered diminutive polyps that were proven to be adenomas or hyperplastic polyps.Entities:
Keywords: colorectal adenomas; endoscopic polypectomy; polyp
Year: 2017 PMID: 29066576 PMCID: PMC6839831 DOI: 10.1136/gutjnl-2017-314547
Source DB: PubMed Journal: Gut ISSN: 0017-5749 Impact factor: 23.059
Figure 1. Schematic of the deep convolutional neural network model used.
Figure 2. Schematic of the data preparation and training procedure of the deep convolutional neural network (DCNN) frame classifier. Raw videos are curated and tagged on a frame-by-frame basis. Then videos are split into disjoint databases: the larger serving as the training set and the smaller serving as a validation set. The purpose of the latter is to carry out ‘early stopping’ during the training procedure. Data augmentation is performed on the training frames only. After training, the resulting frame classification model can be used for prediction on new videos.
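The early-stopping step described in the caption can be sketched as a small control loop: train for one epoch, score the held-out validation videos, and stop once validation loss stops improving. This is an illustrative sketch only; the function names and the `patience` parameter are assumptions, not the authors' published training code.

```python
# Minimal early-stopping loop (hypothetical illustration of the procedure
# in Figure 2; not the authors' actual implementation).

def train_with_early_stopping(train_step, validate, max_epochs=100, patience=5):
    """Stop once validation loss fails to improve for `patience` epochs.

    train_step(epoch) -- runs one pass over the (augmented) training frames.
    validate(epoch)   -- returns the loss on the held-out validation videos.
    """
    best_loss = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_step(epoch)            # one epoch over the training set
        val_loss = validate(epoch)   # evaluate on the disjoint validation set
        if val_loss < best_loss:
            best_loss = val_loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                # validation loss has plateaued: stop
    return best_loss
```

Because the validation videos are disjoint from the training videos, the plateau in `validate` reflects generalisation to unseen exams rather than memorisation of the training frames.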
Figure 3. Illustration of the real-time prediction on a new video. Individual frames from the video are presented to the classification model (resulting from the training procedure), whose output is then processed by the credibility update mechanism. The result is a class probability for each frame (where the class may be one of ‘NICE Type 1’, ‘NICE Type 2’, ‘No Polyp’, ‘Unsuitable’), as well as a credibility score between 0% and 100%. NICE, narrow band imaging International Colorectal Endoscopic.
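The credibility update in Figure 3 accumulates per-frame class probabilities into a running class estimate plus a 0–100% credibility score. The paper does not publish the exact update rule, so the exponential smoothing below is an illustrative stand-in: each new frame's probabilities are blended into the running state, and credibility is read off the dominant class.

```python
# Hedged sketch of a per-frame credibility update over the four classes
# named in Figure 3. The smoothing rule is an assumption for illustration,
# not the authors' published mechanism.

CLASSES = ("NICE Type 1", "NICE Type 2", "No Polyp", "Unsuitable")

def update_credibility(state, frame_probs, alpha=0.2):
    """Blend one frame's class probabilities into the running state.

    state       -- dict mapping class name to running probability (may be empty).
    frame_probs -- dict mapping class name to this frame's predicted probability.
    Returns (new_state, top_class, credibility_percent).
    """
    uniform = 1.0 / len(CLASSES)  # uninformative prior before any frames
    new_state = {
        c: (1 - alpha) * state.get(c, uniform) + alpha * frame_probs[c]
        for c in CLASSES
    }
    top = max(new_state, key=new_state.get)
    credibility = round(100 * new_state[top])
    return new_state, top, credibility
```

Consecutive frames that agree drive the credibility toward 100%, while conflicting or 'Unsuitable' frames pull it back down, matching the behaviour the figure describes.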
Assignment of narrow band imaging International Colorectal Endoscopic (NICE) classification type 1 vs NICE type 2, compared with the pathology-determined histology

| Pathology | Predicted: NICE type 1 | Predicted: NICE type 2 |
|---|---|---|
| Hyperplastic | 33 | 7 |
| Adenoma | 1 | 65 |
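The confusion matrix above yields the usual diagnostic metrics directly. Treating NICE type 2 (adenoma) as the positive class, with counts taken from the table (which covers the 106 polyps tabulated, not all 125 test videos):

```python
# Diagnostic metrics derived from the confusion matrix in the table,
# with NICE type 2 (adenoma) as the positive class.
tp, fn = 65, 1   # adenomas predicted type 2 / predicted type 1
tn, fp = 33, 7   # hyperplastic predicted type 1 / predicted type 2

sensitivity = tp / (tp + fn)                 # 65/66 ≈ 0.985
specificity = tn / (tn + fp)                 # 33/40 = 0.825
accuracy = (tp + tn) / (tp + tn + fp + fn)   # 98/106 ≈ 0.925

print(f"sensitivity={sensitivity:.1%} "
      f"specificity={specificity:.1%} "
      f"accuracy={accuracy:.1%}")
```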
Figure 4. (A) Screen shot of the model during the evaluation of a NICE type 1 lesion (hyperplastic polyp). The display shows the type determined by the model (type 1) and the probability (100%). (B) Screen shot of the model in the evaluation of a NICE type 2 lesion (conventional adenoma). The display shows the type 2 determined by the model and the probability (100%) (see video). NICE, narrow band imaging International Colorectal Endoscopic.