Fahed Jubair1, Omar Al-Karadsheh2, Dimitrios Malamos3, Samara Al Mahdi2, Yusser Saad2, Yazan Hassona2. 1. Computer Engineering Department, School of Engineering, The University of Jordan, Amman, Jordan. 2. Department of Oral and Maxillofacial Surgery, Oral Medicine, and Periodontics, School of Dentistry, The University of Jordan, Amman, Jordan. 3. Oral Medicine Clinic, 1st Regional Health District of Attica, National Organization for the Provision of Health Services, Athens, Greece.
Abstract
OBJECTIVES: To develop a lightweight deep convolutional neural network (CNN) for binary classification of oral lesions as benign versus malignant or potentially malignant using standard real-time clinical images. METHODS: A small deep CNN that uses a pretrained EfficientNet-B0 as a lightweight transfer-learning backbone was proposed. A dataset of 716 clinical images was used to train and test the proposed model. Accuracy, specificity, sensitivity, the receiver operating characteristic (ROC) curve, and the area under the curve (AUC) were used to evaluate performance. Bootstrapping with 120 repetitions was used to calculate arithmetic means and 95% confidence intervals (CIs). RESULTS: The proposed CNN model achieved an accuracy of 85.0% (95% CI: 81.0%-90.0%), a specificity of 84.5% (95% CI: 78.9%-91.5%), a sensitivity of 86.7% (95% CI: 80.4%-93.3%), and an AUC of 0.928 (95% CI: 0.88-0.96). CONCLUSIONS: Lightweight deep CNNs can be an effective way to build low-budget embedded vision devices with limited computation power and memory capacity for the diagnosis of oral cancer. Artificial intelligence (AI) can improve the quality and reach of oral cancer screening and early detection.
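The bootstrapping procedure described in METHODS (120 repetitions, arithmetic means, 95% CIs) can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `bootstrap_accuracy_ci` and the percentile-based interval are assumptions; the paper does not specify how its CIs were computed.

```python
import random
import statistics

def bootstrap_accuracy_ci(y_true, y_pred, n_boot=120, alpha=0.05, seed=0):
    """Resample (label, prediction) pairs with replacement, recompute
    accuracy on each resample, and return the arithmetic mean of the
    bootstrap scores with a percentile (1 - alpha) confidence interval.

    NOTE: a generic percentile-bootstrap sketch, assumed for illustration;
    the same loop applies to sensitivity, specificity, or AUC."""
    rng = random.Random(seed)
    n = len(y_true)
    pairs = list(zip(y_true, y_pred))
    scores = []
    for _ in range(n_boot):
        # Draw n pairs with replacement and score the resample.
        sample = [pairs[rng.randrange(n)] for _ in range(n)]
        scores.append(sum(t == p for t, p in sample) / n)
    scores.sort()
    lo = scores[int((alpha / 2) * n_boot)]
    hi = scores[min(n_boot - 1, int((1 - alpha / 2) * n_boot))]
    return statistics.mean(scores), lo, hi
```

For a held-out test set, `y_true` holds the ground-truth lesion labels (0 = benign, 1 = malignant or potentially malignant) and `y_pred` the CNN's binarized predictions; the returned triple is the mean accuracy and its 95% CI bounds.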