Yuan-Yen Chang1, Pai-Chi Li1, Ruey-Feng Chang1,2,3, Chih-Da Yao4, Yang-Yuan Chen5,6, Wen-Yen Chang7, Hsu-Heng Yen8,9,10,11,12 (91646@cch.org.tw). 1. Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan. 2. Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan. 3. Artificial Intelligence Development Center, Changhua Christian Hospital, Changhua, Taiwan. 4. Division of Gastroenterology, Lukang Christian Hospital, Changhua, Taiwan. 5. Division of Gastroenterology, Changhua Christian Hospital, Changhua, Taiwan. 6. Department of Hospitality, MingDao University, Changhua, Taiwan. 7. Department of Medical Education, National Taiwan University Hospital, Taipei, Taiwan. 8. Artificial Intelligence Development Center, Changhua Christian Hospital, Changhua, Taiwan. 9. Division of Gastroenterology, Changhua Christian Hospital, Changhua, Taiwan. 10. General Education Center, Chienkuo Technology University, Changhua, Taiwan. 11. Department of Electrical Engineering, Chung Yuan University, Taoyuan, Taiwan. 12. College of Medicine, National Chung Hsing University, Taichung, Taiwan.
Abstract
BACKGROUND: Photodocumentation during endoscopy procedures is one of the indicators of endoscopy performance quality; however, this indicator is difficult to measure and audit in the endoscopy unit. Emerging artificial intelligence technology may solve this problem, but it requires a large amount of material for model development. We developed a deep learning-based endoscopic anatomy classification system using convolutional neural networks with an accelerated data preparation approach. PATIENTS AND METHODS: We retrospectively collected 8,041 images from esophagogastroduodenoscopy (EGD) procedures, and two experts labeled them for nine anatomical locations of the upper gastrointestinal tract. A base model for EGD image multiclass classification was first developed, and an additional 6,091 images were enrolled and classified by the base model. A total of 5,963 of these images were manually confirmed and added to develop the subsequent enhanced model. Additional internal and external endoscopy image datasets were used to test model performance. RESULTS: The base model achieved a total accuracy of 96.29%; the enhanced model achieved 96.64%. Overall accuracy improved with the enhanced model compared with the base model on the internal test dataset both without narrowband images (93.05% vs. 91.25%, p < 0.01) and with narrowband images (92.74% vs. 90.46%, p < 0.01). The enhanced model achieved a total accuracy of 92.56% on the external test dataset. CONCLUSIONS: We constructed a deep learning-based model with an accelerated data preparation approach that can be used for quality control in endoscopy units. The model was validated with both internal and external datasets with high accuracy.
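The "accelerated data preparation" workflow described in the abstract (the base model pre-classifies new images, and experts only confirm or correct the proposals) can be sketched as a simple model-assisted labeling loop. The sketch below is an illustrative assumption, not the authors' code: the class list, the `propose_labels` helper, the confidence threshold, and the dummy predictor are all hypothetical.

```python
# Hypothetical sketch of a model-assisted labeling loop: a trained base
# model pre-classifies unlabeled EGD images; confident predictions go to
# experts for quick confirmation, the rest are labeled manually.
# All names here are illustrative assumptions, not the authors' code.

ANATOMY_CLASSES = [  # nine upper-GI landmarks; exact label names assumed
    "esophagus", "squamocolumnar_junction", "cardia", "fundus",
    "body", "angularis", "antrum", "duodenal_bulb", "duodenum_2nd",
]

def propose_labels(images, predict, threshold=0.5):
    """Split images into confident proposals (for expert confirmation)
    and low-confidence ones (for full manual labeling).

    `predict` maps an image to a dict of class -> probability.
    Returns two lists of (image, proposed_label, probability) tuples.
    """
    confident, uncertain = [], []
    for img in images:
        probs = predict(img)
        label, p = max(probs.items(), key=lambda kv: kv[1])
        (confident if p >= threshold else uncertain).append((img, label, p))
    return confident, uncertain

if __name__ == "__main__":
    # Dummy stand-in for the trained base CNN.
    def dummy_predict(img):
        return {c: (0.9 if c == "antrum" else 0.0125) for c in ANATOMY_CLASSES}

    conf, unc = propose_labels(["img_001.png"], dummy_predict)
    print(len(conf), len(unc))  # 1 0
```

Only the confirmed proposals (5,963 of 6,091 in the study) would then be merged into the training set to fit the enhanced model; the threshold and routing logic here are design assumptions.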