Hassan Aqeel Khan1, Muhammad Ali Haider2, Hassan Ali Ansari3, Hamna Ishaq2, Amber Kiyani4, Kanwal Sohail5, Muhammad Muhammad6, Syed Ali Khurram7. 1. Assistant Professor, College of Computer Science and Engineering, University of Jeddah, Kingdom of Saudi Arabia. 2. Electrical Engineering student, National University of Sciences and Technology, Islamabad, Pakistan. 3. National University of Sciences and Technology, Islamabad, Pakistan. 4. Assistant Professor, Riphah International University, Islamabad, Pakistan. Electronic address: Amber.kiyani@riphah.edu.pk. 5. Demonstrator, Riphah International University, Islamabad, Pakistan. 6. Assistant Professor, Riphah International University, Islamabad, Pakistan. 7. Senior Clinical Lecturer, Consultant Oral Pathologist, University of Sheffield, Sheffield, UK.
Abstract
OBJECTIVE: The aim of this study was to investigate automated feature detection, segmentation, and quantification of common findings in periapical radiographs (PRs) using deep learning (DL)-based computer vision techniques.
STUDY DESIGN: Caries, alveolar bone recession, and interradicular radiolucencies were labeled on 206 digital PRs by 3 specialists (2 oral pathologists and 1 endodontist). The PRs were divided into "Training and Validation" and "Test" data sets consisting of 176 and 30 PRs, respectively. Multiple transformations of the image data were used as input to deep neural networks during training. Outcomes of existing and purpose-built DL architectures were compared to identify the most suitable architecture for automated analysis.
RESULTS: The U-Net architecture and its variants significantly outperformed XNet and SegNet on all metrics. The overall best-performing architecture on the validation data set was "U-Net+Densenet121" (mean intersection over union [mIoU] = 0.501; Dice coefficient = 0.569). Performance of all architectures degraded on the "Test" data set; "U-Net" delivered the best performance (mIoU = 0.402; Dice coefficient = 0.453). Interradicular radiolucencies were the most difficult to segment.
CONCLUSIONS: DL has potential for automated analysis of PRs but warrants further research. Among existing off-the-shelf architectures, U-Net and its variants delivered the best performance. Further performance gains could be obtained via purpose-built architectures and a larger multicentric cohort.
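The two evaluation metrics reported above, intersection over union (IoU) and the Dice coefficient, can be computed directly from predicted and ground-truth segmentation masks. The snippet below is a minimal illustrative sketch (not the authors' evaluation code) showing how both metrics are typically defined for a single binary mask; the toy masks are invented for demonstration.

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union: |A ∩ B| / |A ∪ B| for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0  # empty masks count as a perfect match

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2 * inter / total if total else 1.0

# Toy 4x4 masks for illustration (hypothetical data, not from the study)
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(iou(pred, gt))   # 3 overlapping pixels / 4 in the union = 0.75
print(dice(pred, gt))  # 2*3 / (4 + 3) ≈ 0.857
```

The "mean" in mIoU refers to averaging the per-class IoU over the labeled finding types (here, caries, bone recession, and interradicular radiolucencies).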