Li Tong1, Hang Wu2, May D Wang1,3. 1. Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, Georgia, USA. 2. Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA. 3. Departments of Electrical and Computer Engineering, Computational Science and Engineering, Winship Cancer Institute, Parker H. Petit Institute for Bioengineering and Biosciences, Institute of People and Technology, Georgia Institute of Technology and Emory University, Atlanta, Georgia, USA.
Abstract
OBJECTIVE: This article presents a novel method of semisupervised learning using convolutional autoencoders for optical endomicroscopic images. Optical endomicroscopy (OE) is a newly emerged biomedical imaging modality that can support real-time clinical decisions on the grade of dysplasia. To enable real-time decision making, computer-aided diagnosis (CAD) is essential for its high speed and objectivity. However, traditional supervised CAD requires a large amount of labeled training data. Compared with the limited number of labeled images, a much larger number of unlabeled images can be collected. To utilize these unlabeled images, we have developed a Convolutional AutoEncoder based Semi-supervised Network (CAESNet) to improve classification performance. MATERIALS AND METHODS: We applied our method to an OE dataset collected from patients undergoing endoscope-based confocal laser endomicroscopy procedures for Barrett's esophagus at Emory Hospital, which consists of 429 labeled images and 2826 unlabeled images. Our CAESNet consists of an encoder with 5 convolutional layers, a decoder with 5 transposed convolutional layers, and a classification network with 2 fully connected layers and a softmax layer. In the unsupervised stage, we first update the encoder and decoder with both labeled and unlabeled images to learn an efficient feature representation. In the supervised stage, we further update the encoder and the classification network with only labeled images for multiclass classification of the OE images. RESULTS: Our proposed semisupervised method, CAESNet, achieves the best average performance for multiclass classification of OE images, surpassing supervised methods including standard convolutional networks and a convolutional autoencoder network. CONCLUSIONS: Our semisupervised CAESNet efficiently utilizes unlabeled OE images, improving diagnosis and decision making for patients with Barrett's esophagus.
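The two-stage training scheme described in MATERIALS AND METHODS can be illustrated with a simplified, fully connected analogue. This is a minimal sketch, not the authors' implementation: dense linear layers stand in for the paper's 5 convolutional, 5 transposed-convolutional, and 2 fully connected layers, and all sizes, learning rates, and data below are synthetic assumptions chosen only to show the stage-1 (unsupervised reconstruction on all images) then stage-2 (supervised fine-tuning on labeled images) structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the OE dataset: many unlabeled images, few labeled.
D, K, C = 16, 4, 2            # input dim, latent dim, number of classes
n_unlab, n_lab = 200, 40

mu = np.array([-1.0, 1.0])    # per-class mean applied to every input dimension
X_unlab = rng.choice(mu, n_unlab)[:, None] + 0.3 * rng.standard_normal((n_unlab, D))
y_lab = rng.integers(0, C, n_lab)
X_lab = mu[y_lab][:, None] + 0.3 * rng.standard_normal((n_lab, D))
Y = np.eye(C)[y_lab]          # one-hot labels

# Linear "encoder", "decoder", and classifier weights (dense stand-ins).
W_enc = 0.1 * rng.standard_normal((D, K))
W_dec = 0.1 * rng.standard_normal((K, D))
W_cls = 0.1 * rng.standard_normal((K, C))

def recon_loss(X):
    """Mean squared reconstruction error of the autoencoder."""
    E = (X @ W_enc) @ W_dec - X
    return (E ** 2).mean()

# Stage 1 (unsupervised): update encoder + decoder on ALL images
# (labeled and unlabeled) to learn a feature representation.
X_all = np.vstack([X_unlab, X_lab])
loss0 = recon_loss(X_all)
for _ in range(500):
    H = X_all @ W_enc
    E = H @ W_dec - X_all                      # reconstruction error
    scale = 2.0 / E.size                       # d(mean sq. error)/d(logits) scale
    W_dec -= 0.1 * scale * (H.T @ E)
    W_enc -= 0.1 * scale * (X_all.T @ (E @ W_dec.T))
loss1 = recon_loss(X_all)                      # should be below loss0

# Stage 2 (supervised): update encoder + classifier on labeled images only,
# minimizing softmax cross-entropy.
for _ in range(300):
    H = X_lab @ W_enc
    logits = H @ W_cls
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)          # softmax probabilities
    dlogits = (P - Y) / n_lab                  # cross-entropy gradient
    W_cls -= 0.5 * (H.T @ dlogits)
    W_enc -= 0.5 * (X_lab.T @ (dlogits @ W_cls.T))

acc = (np.argmax(X_lab @ W_enc @ W_cls, axis=1) == y_lab).mean()
```

The key design point mirrored from the abstract is that the encoder is shared across both stages: stage 1 shapes its weights using the full 3255-image pool, so stage 2 fine-tunes from a representation the small labeled set alone could not provide.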