Dima M Alalharith1, Hajar M Alharthi1, Wejdan M Alghamdi1, Yasmine M Alsenbel1, Nida Aslam1, Irfan Ullah Khan1, Suliman Y Shahin2, Simona Dianišková3, Muhanad S Alhareky4, Kasumi K Barouch5.
Abstract
Computer-based technologies play a central role in dentistry, offering many methods for diagnosing and detecting diseases such as periodontitis. The current study aimed to develop and evaluate state-of-the-art object detection and recognition techniques and deep learning algorithms for the automatic detection of periodontal disease in orthodontic patients using intraoral images. A total of 134 intraoral images were divided into a training dataset (n = 107 [80%]) and a test dataset (n = 27 [20%]). Two Faster Region-based Convolutional Neural Network (R-CNN) models using a ResNet-50 Convolutional Neural Network (CNN) backbone were developed: the first detects the teeth to locate the region of interest (ROI), while the second detects gingival inflammation. Detection accuracy, precision, recall, and mean average precision (mAP) were calculated to verify the significance of the proposed models. The teeth detection model achieved an accuracy, precision, recall, and mAP of 100%, 100%, 51.85%, and 100%, respectively; the inflammation detection model achieved 77.12%, 88.02%, 41.75%, and 68.19%, respectively. This study demonstrated the viability of deep learning models for detecting and diagnosing gingivitis in intraoral images, highlighting their potential usability in dentistry and in reducing the severity of periodontal disease globally through preemptive, non-invasive diagnosis.
Keywords: convolutional neural networks; deep learning; gingivitis; periodontal disease
Year: 2020 PMID: 33203065 PMCID: PMC7697132 DOI: 10.3390/ijerph17228447
Source DB: PubMed Journal: Int J Environ Res Public Health ISSN: 1660-4601 Impact factor: 3.390
Figure 1. Architecture of the Faster R-CNN model.
Figure 2. The cropping algorithm steps. (a) Computing the height and width of the bounding box; (b) adding k to the upper bound of the bounding box to capture the upper gingiva; (c) expanding the upper and lower bounds of the bounding box; (d) a successful capture of the gingival area; (e) narrowing the width of the bounding box to capture the "Big M" region; (f) a successful capture of the "Big M" region.
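The cropping steps in Figure 2 can be sketched as a simple bounding-box adjustment. The function below is a minimal illustration, not the authors' implementation: the box format `(x1, y1, x2, y2)`, the expansion constant `k`, and the width-narrowing margin `m` are assumptions, since the paper's caption does not publish its constants.

```python
def crop_gingival_roi(box, k, m, img_w, img_h):
    """Expand a detected tooth box toward the gingiva (illustrative sketch).

    box is (x1, y1, x2, y2) with y increasing downward; k and m are
    hypothetical margins for the vertical expansion and width narrowing.
    """
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1              # (a) height and width of the bounding box
    y1 = max(0, y1 - k)                  # (b) raise the upper bound to capture the upper gingiva
    y2 = min(img_h, y2 + k)              # (c) expand the lower bound as well
    x1 = min(max(0, x1 + m), img_w)      # (e) narrow the width toward the "Big M" region
    x2 = max(min(img_w, x2 - m), x1)
    return (x1, y1, x2, y2)

# Example: a 120x140 tooth box inside a 640x480 image
print(crop_gingival_roi((100, 120, 220, 260), k=30, m=15, img_w=640, img_h=480))
# → (115, 90, 205, 290)
```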
Figure 3. Proposed methodology architecture.
Figure 4. Computing the Intersection over Union (IoU).
Accuracy, precision, recall, and mean average precision (mAP) for both the teeth detection model and inflammation detection model.
| Class | Training * | Testing * | Accuracy | Precision | Recall | mAP |
|---|---|---|---|---|---|---|
| Teeth | 107 | 27 | 100% | 100% | 51.85% | 100% |
| Inflamed | 226 | 79 | 78.46% | 87.14% | 35.05% | 57.44% |
| Non-Inflamed | 416 | 83 | 75.79% | 88.91% | 48.47% | 78.94% |
| Overall Total | 642 | 162 | 77.12% | 88.02% | 41.75% | 68.19% |
* Number of bounding boxes/annotations.
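The accuracy, precision, and recall in the table follow the standard confusion-matrix definitions. A minimal sketch with hypothetical counts (the paper does not report its raw true/false positive counts):

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard detection metrics from confusion-matrix counts.

    Counts here are hypothetical, for illustration only.
    """
    precision = tp / (tp + fp)               # fraction of detections that are correct
    recall = tp / (tp + fn)                  # fraction of ground-truth boxes found
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, accuracy

# Hypothetical counts for one class
p, r, a = detection_metrics(tp=80, fp=20, tn=60, fn=40)
print(f"precision={p:.2f} recall={r:.2f} accuracy={a:.2f}")
# → precision=0.80 recall=0.67 accuracy=0.70
```

mAP additionally averages the area under the precision-recall curve over classes, which is why it can differ substantially from the pointwise precision and recall reported in the same row.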
Figure 5. Sample annotated images. (A) All six regions correctly detected; (B) distal region of tooth No. 11 incorrectly detected as inflamed; (C) distal region of tooth No. 11 not detected, and middle region of tooth No. 21 incorrectly detected as inflamed.
Results of the inflammation detection model for three patients. The detection results match the clinical findings except where noted (GT = ground truth).
| Patient | Tooth No. 11: Distal | Tooth No. 11: Middle | Tooth No. 11: Mesial | Tooth No. 21: Mesial | Tooth No. 21: Middle | Tooth No. 21: Distal |
|---|---|---|---|---|---|---|
| 1 | Non-Inflamed | Non-Inflamed | Inflamed | Inflamed | Non-Inflamed | Non-Inflamed |
| 2 | Inflamed (GT: Non-Inflamed) | Non-Inflamed | Non-Inflamed | Non-Inflamed | Non-Inflamed | Non-Inflamed |
| 3 | Not Detected (GT: Non-Inflamed) | Non-Inflamed | Inflamed | Inflamed | Inflamed (GT: Non-Inflamed) | Inflamed |