Yoshiko Ariji1, Yudai Yanashita2, Syota Kutsuna2, Chisako Muramatsu3, Motoki Fukuda4, Yoshitaka Kise4, Michihito Nozawa4, Chiaki Kuwada5, Hiroshi Fujita6, Akitoshi Katsumata7, Eiichiro Ariji8. 1. Associate Professor, Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan. Electronic address: yoshiko@dpc.agu.ac.jp. 2. Postgraduate student, Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, Gifu, Japan. 3. Associate Professor, Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, Gifu, Japan; currently, Faculty of Data Science, Shiga University, Shiga, Japan. 4. Associate Professor, Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan. 5. Part-time lecturer, Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan. 6. Professor, Department of Electrical, Electronic and Computer Engineering, Faculty of Engineering, Gifu University, Gifu, Japan. 7. Professor, Department of Oral Radiology, Asahi University School of Dentistry, Mizuho, Japan. 8. Professor, Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan.
Abstract
OBJECTIVE: The aim of this study was to investigate whether a deep learning object detection technique can automatically detect and classify radiolucent lesions in the mandible on panoramic radiographs.

STUDY DESIGN: Panoramic radiographs of patients with mandibular radiolucent lesions of 10 mm or greater, including ameloblastomas, odontogenic keratocysts, dentigerous cysts, radicular cysts, and simple bone cysts, were included. Lesion labels, including region of interest coordinates, were created in text format. In total, 210 training images and labels were imported into the deep learning GPU training system (DIGITS). A learning model was created using the deep neural network DetectNet. Two testing data sets (testing 1 and 2) were applied to the learning model. Similarities and differences between the prediction and ground-truth images were evaluated using Intersection over Union (IoU). Sensitivity and false-positive rate per image were calculated using an IoU threshold of 0.6. The detection performance for each disease was assessed using multiclass learning.

RESULTS: Sensitivity was 0.88 for both testing 1 and 2. The false-positive rate per image was 0.00 for testing 1 and 0.04 for testing 2. The best combination of detection and classification sensitivity occurred with dentigerous cysts.

CONCLUSIONS: Radiolucent lesions of the mandible can be detected with high sensitivity using deep learning.
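For readers unfamiliar with the Intersection over Union criterion used in the evaluation, the sketch below illustrates how IoU between a predicted and a ground-truth bounding box is computed and how a 0.6 threshold separates true positives from misses. This is a minimal illustration, not the authors' code; the box format (x1, y1, x2, y2) and the helper names are assumptions for the example.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the overlap rectangle (empty overlap clamps to zero width/height).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def is_true_positive(pred, truth, threshold=0.6):
    """A detection counts as a hit when its IoU with the ground truth meets the study's threshold."""
    return iou(pred, truth) >= threshold
```

With this criterion, per-image sensitivity is the fraction of ground-truth lesions matched by a prediction at IoU >= 0.6, and predictions matching no lesion contribute to the false-positive rate per image.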