Shawn Mathew, Saad Nadeem, Arie Kaufman
Abstract
Automated analysis of optical colonoscopy (OC) video frames (to assist endoscopists during OC) is challenging due to variations in color, lighting, texture, and specular reflections. Previous methods either remove some of these variations via preprocessing (making pipelines cumbersome) or add diverse training data with annotations (which is expensive and time-consuming). We present CLTS-GAN, a new deep learning model that gives fine control over color, lighting, texture, and specular-reflection synthesis for OC video frames. We show that adding these colonoscopy-specific augmentations to the training data can improve state-of-the-art polyp detection/segmentation methods and drive the next generation of OC simulators for training medical students. The code and pre-trained models for CLTS-GAN are available on the Computational Endoscopy Platform GitHub (https://github.com/nadeemlab/CEP).
Keywords: Augmentation; Colonoscopy; Polyp Detection
Year: 2022 PMID: 36178456 PMCID: PMC9518696 DOI: 10.1007/978-3-031-16449-1_49
Source DB: PubMed Journal: Med Image Comput Comput Assist Interv