Bofan Song, Sumsum Sunny, Ross D Uthoff, Sanjana Patrick, Amritha Suresh, Trupti Kolur, G Keerthi, Afarin Anbarani, Petra Wilder-Smith, Moni Abraham Kuriakose, Praveen Birur, Jeffrey J Rodriguez, Rongguang Liang.
Abstract
With the goal of screening high-risk populations for oral cancer in low- and middle-income countries (LMICs), we have developed a low-cost, portable, easy-to-use smartphone-based intraoral dual-modality imaging platform. In this paper we present an image classification approach based on autofluorescence and white light images using deep learning methods. The information from the autofluorescence and white light image pair is extracted, calculated, and fused to feed the deep learning neural networks. We have investigated and compared the performance of different convolutional neural networks, transfer learning, and several regularization techniques for oral cancer classification. Our experimental results demonstrate the effectiveness of deep learning methods in classifying dual-modal images for oral cancer detection.
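The abstract describes extracting and fusing information from each autofluorescence/white-light image pair before feeding it to a network. A minimal sketch of one plausible fusion step is shown below; the function name, the channel layout, and the red/green ratio channel are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def fuse_dual_modality(white_light, autofluorescence):
    """Stack a white-light RGB image and an autofluorescence intensity
    image into one multi-channel array suitable as CNN input.

    Hypothetical fusion scheme: the paper states that information from
    the image pair is "extracted, calculated, and fused" but this
    particular channel combination is an assumption for illustration.
    """
    # Normalize both modalities from 8-bit [0, 255] to float [0, 1]
    wl = white_light.astype(np.float32) / 255.0
    af = autofluorescence.astype(np.float32) / 255.0

    # Autofluorescence is often captured as a single-channel intensity map
    if af.ndim == 2:
        af = af[..., np.newaxis]

    # Example derived channel: red/green ratio of the white-light image
    # (small epsilon avoids division by zero in dark regions)
    ratio = wl[..., 0:1] / (wl[..., 1:2] + 1e-6)

    # Fused tensor: 3 RGB channels + 1 autofluorescence + 1 ratio = 5
    return np.concatenate([wl, af, ratio], axis=-1)
```

A fused array like this can then be passed to a standard convolutional network (optionally initialized with transfer-learned weights, as the abstract mentions) in place of a plain RGB input.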
Year: 2018 PMID: 30460130 PMCID: PMC6238918 DOI: 10.1364/BOE.9.005318
Source DB: PubMed Journal: Biomed Opt Express ISSN: 2156-7085 Impact factor: 3.732