Yassir Edrees Almalki, Amsa Imam Din, Muhammad Ramzan, Muhammad Irfan, Khalid Mahmood Aamir, Abdullah Almalki, Saud Alotaibi, Ghada Alaglan, Hassan A Alshamrani, Saifur Rahman.
Abstract
Teeth are among the most challenging structures in the human body to examine. Existing methods for detecting dental problems suffer from low efficiency, complex operation, and a high level of user intervention. Older oral disease detection approaches were manual, time-consuming, and required a dentist to examine and evaluate the disease. To address these concerns, we propose a novel deep learning approach for detecting and classifying the four most common dental problems: cavities, root canals, dental crowns, and broken-down roots (BDR). In this study, we apply the YOLOv3 deep learning model to develop an automated tool capable of diagnosing and classifying dental abnormalities in dental panoramic X-ray (OPG) images. Because of the scarcity of dental disease datasets, we created a custom dental X-ray dataset for detecting and classifying these diseases; after augmentation it contained 1200 images. The dataset comprises dental panoramic images with disorders such as cavities, root canals, BDR, and dental crowns, and was divided into 70% training and 30% testing images. After training, the YOLOv3 model was evaluated on the test images. The experiments demonstrated that the proposed model achieved 99.33% accuracy and outperformed existing state-of-the-art models in accuracy and generality when those models were trained on our dataset.
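The 70%/30% split described above can be sketched as follows; the file-name pattern and the shuffle seed are illustrative assumptions, not details from the paper.

```python
import random

def split_dataset(image_paths, train_fraction=0.7, seed=42):
    """Shuffle the image paths and split them into train/test subsets."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_fraction)
    return paths[:cut], paths[cut:]

# 1200 augmented images split 70%/30%, as in the paper
images = [f"xray_{i:04d}.jpg" for i in range(1200)]
train, test = split_dataset(images)
print(len(train), len(test))  # 840 360
```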
Keywords: BDR; OPG; YOLO; annotation; augmentation; deep learning; dentistry; medical imaging
Year: 2022 PMID: 36236476 PMCID: PMC9572157 DOI: 10.3390/s22197370
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Existing models for dental disease detection.
| Year | Modality | Dataset Size | Model | Accuracy | Authors |
|---|---|---|---|---|---|
| 2021 | OPG | 100 | ResNet | 93.2% | Chisako Muramatsu et al. |
| 2021 | OPG | 708 | CNN | 88.89% | Jun-Young Cha et al. |
| 2021 | OPG | 420 | CNN | 90.36% | Liu et al. |
| 2021 | OPG | 300 | CNN | 82.7% | Byung Su et al. |
| 2020 | OPG | 206 | CNN & RCNN | 90% | Hassan Aqeel Khan et al. |
| 2020 | OPG | 340 | CNN | 93% | Chang et al. |
| 2020 | OPG | 83 | SVM | 93.6% | Abdalla-Aslan et al. |
| 2020 | OPG | 100 | CNN | 81% | Thanathorn Won et al. |
| 2020 | OPG | 680 | CNN & VGG16 | 84% | Lee et al. |
| 2019 | OPG | 353 | CNN | 81% | Krois et al. |
| 2020 | OPG | 300 | CNN | 93% | Fukuda et al. |
| 2019 | OPG | 85 | Deep Feedforward CNN model | 81% | Krois et al. |
| 2019 | OPG | 200 | CNN | 87% | Bouchahma et al. |
Figure 1. Proposed architecture's workflow for teeth disease classification.
Figure 2. Custom dataset of panoramic X-rays.
Figure 3. Custom dataset after augmentation.
Figure 4. Labeling process with LabelImg.
Figure 5. Annotation file in .txt format.
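LabelImg's YOLO-format .txt files store one object per line as `class x_center y_center width height`, with all coordinates normalized to [0, 1]. A minimal parser as a sketch; the sample line and the 416 × 416 image size are hypothetical values for illustration.

```python
def parse_yolo_annotation(line, img_w, img_h):
    """Convert one YOLO .txt annotation line (class x_c y_c w h,
    all normalized to [0, 1]) into pixel-space corner coordinates."""
    cls, xc, yc, w, h = line.split()
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    x1 = (xc - w / 2) * img_w
    y1 = (yc - h / 2) * img_h
    x2 = (xc + w / 2) * img_w
    y2 = (yc + h / 2) * img_h
    return int(cls), (round(x1), round(y1), round(x2), round(y2))

# Hypothetical annotation line: class 0, box centered in a 416x416 image
cls, box = parse_yolo_annotation("0 0.5 0.5 0.25 0.25", 416, 416)
print(cls, box)  # 0 (156, 156, 260, 260)
```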
Structure of Darknet-53.
| Layers | Filters | Size/Stride | Repeat | Output Size |
|---|---|---|---|---|
| Convolutional | 32 | 3 × 3/1 | 1 | 416 × 416 |
| Convolutional | 64 | 3 × 3/2 | 1 | 208 × 208 |
| Convolutional | 32 | 1 × 1/1 | 1 | 208 × 208 |
| Convolutional | 128 | 3 × 3/2 | 1 | 104 × 104 |
| Convolutional | 64 | 1 × 1/1 | 2 | 104 × 104 |
| Convolutional | 256 | 3 × 3/2 | 1 | 52 × 52 |
| Convolutional | 128 | 1 × 1/1 | 8 | 52 × 52 |
| Convolutional | 512 | 3 × 3/2 | 1 | 26 × 26 |
| Convolutional | 256 | 1 × 1/1 | 8 | 26 × 26 |
| Convolutional | 1024 | 3 × 3/2 | 1 | 13 × 13 |
| Convolutional | 512 | 1 × 1/1 | 4 | 13 × 13 |
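The output-size column of the table follows directly from the strides: each 3 × 3/2 convolution halves the spatial resolution, and five of them take the 416 × 416 input down to the final 13 × 13 feature map. A quick check:

```python
def darknet53_feature_sizes(input_size=416, num_stride2_convs=5):
    """Each 3x3 stride-2 convolution in Darknet-53 halves the spatial
    size; five of them reduce 416x416 down to 13x13."""
    sizes = [input_size]
    for _ in range(num_stride2_convs):
        sizes.append(sizes[-1] // 2)
    return sizes

print(darknet53_feature_sizes())  # [416, 208, 104, 52, 26, 13]
```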
Parameters of the custom configuration file.
| Parameters | Values |
|---|---|
| Batch | 64 |
| Subdivisions | 120 |
| Width | 416 |
| Height | 416 |
| Channel | 3 |
| Learning rate | 0.001 |
| Max batches | 80,000 |
| Steps | 7800, 8200 |
| Classes | 4 |
| Filters | 27 |
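The filter count in the table follows the standard YOLOv3 convention: each of the three anchors per detection scale predicts 4 box offsets, 1 objectness score, and one score per class, so 4 classes need 3 × (4 + 5) = 27 filters in each detection layer. A quick check:

```python
def yolo_head_filters(num_classes, anchors_per_scale=3):
    """Filters per YOLO detection layer:
    anchors x (4 box offsets + 1 objectness score + class scores)."""
    return anchors_per_scale * (num_classes + 5)

print(yolo_head_filters(4))  # 27, matching the configuration table
```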
Google Colab parameters.
| Parameters | Values |
|---|---|
| CUDA | 10.1 |
| cuDNN | 7.6.5 |
| GPU | NVIDIA Tesla K80 |
| RAM | 12 GB |
| Disk space | 68 GB |
Figure 6. Training results chart for mAP, F1-score, recall, precision, and IoU.
Training results of F1-score, recall, IOU, precision, mAP.
| Iteration | F1 Score | Recall | IOU | Precision | mAP |
|---|---|---|---|---|---|
| 1000 | 0.57 | 0.55 | 39.33% | 0.6 | 54.87% |
| 2000 | 0.88 | 0.92 | 60.95% | 0.85 | 91.22% |
| 3000 | 0.93 | 0.95 | 72.37% | 0.92 | 95.43% |
| 4000 | 0.94 | 0.95 | 73.28% | 0.93 | 96.86% |
| 5000 | 0.97 | 0.98 | 79.49% | 0.95 | 99.55% |
| 6000 | 0.97 | 0.98 | 82.58% | 0.96 | 99.55% |
| 7000 | 0.97 | 0.97 | 80.7% | 0.96 | 99.26% |
| 8000 | 0.97 | 0.98 | 84.57% | 0.96 | 99.53% |
Figure 7. Testing results chart for mAP, F1-score, recall, precision, and IoU.
Testing results of F1-score, recall, IOU, precision, mAP.
| Iteration | F1 Score | Recall | IOU | Precision | mAP |
|---|---|---|---|---|---|
| 1000 | 0.59 | 0.56 | 40.19% | 0.69 | 54.12% |
| 2000 | 0.89 | 0.92 | 61.28% | 0.89 | 91.26% |
| 3000 | 0.96 | 0.96 | 72.27% | 0.94 | 95.93% |
| 4000 | 0.97 | 0.96 | 73.22% | 0.94 | 97.32% |
| 5000 | 0.97 | 0.98 | 79.45% | 0.96 | 99.58% |
| 6000 | 0.97 | 0.98 | 82.55% | 0.98 | 99.58% |
| 7000 | 0.99 | 0.97 | 80.69% | 0.98 | 99.35% |
| 8000 | 0.99 | 0.98 | 84.56% | 0.99 | 99.33% |
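The F1-score column can be cross-checked as the harmonic mean of precision and recall; for example, the iteration-5000 testing row (precision 0.96, recall 0.98) rounds to the reported 0.97:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Testing row at iteration 5000: precision 0.96, recall 0.98
print(round(f1_score(0.96, 0.98), 2))  # 0.97
```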
Figure 8. Predictions with bounding boxes using the best weights.
Comparative study of the proposed model with recent DL models.
| Year | Dataset Size | Model | Accuracy | Authors |
|---|---|---|---|---|
| 2019 | 800 | Deep neural network | Accuracy: 0.69 | Kim et al. |
| 2018 | 800 | Label tree with cascade network structure using CNN | F-score: 0.959, Precision: 0.958 | Zhang et al. |
| 2020 | 105 | Back-propagation neural network | Accuracy: 0.971, ROC: 0.987, PRC: 0.987, learning rate: 0.4 | Geetha et al. |
| 2019 | 300 | DetectNet with DIGITS | Precision: 0.93, Recall: 0.75 | Fukuda et al. |
| 2020 | 100 | CNN (ResNet-50) | Sensitivity: 0.964, Average accuracy: 0.872 (multi-sided models: 0.932) | Muramatsu et al. |
| 2022 | 846 | Mask R-CNN | F1-score: 0.875, Precision: 0.858, Recall: 0.893, Mean IoU: 0.877 | Lee et al. |
| 2020 | 83 | SVM | Accuracy: 93.6% | Abdalla-Aslan et al. |
| 2019 | 353 | CNN | Accuracy: 81% | Krois et al. |
| 2020 | 340 | CNN | Accuracy: 93% | Chang et al. |
| 2021 | 708 | CNN | Accuracy: 88.89% | Jun-Young Cha et al. |
| 2021 | 300 | CNN | Accuracy: 82.7% | Byung Su et al. |