| Literature DB >> 35494840 |
Umer Rashid (1), Aiman Javid (1), Abdur Rehman Khan (1), Leo Liu (2), Adeel Ahmed (1), Osman Khalid (3), Khalid Saleem (1), Shaista Meraj (4), Uzair Iqbal (5), Raheel Nawaz (2).
Abstract
Nearly 3.5 billion people have oral health issues, including dental caries, whose diagnosis requires dentist-patient contact during oral examinations. Existing automated approaches identify and locate carious regions by processing either colored photographs or X-ray images taken with specialized dental cameras. Dentists find the detected regions difficult to interpret because they are masked with solid coloring, and each tool is limited to a single dental image type. Software tools that localize caries in images taken with ordinary cameras require further investigation. This research provides a mixed dataset of dental images (colored photographs and X-rays), instantiates a deep learning approach that improves the localization of carious regions in dental images, and implements a full-fledged tool that automatically marks carious regions on ordinary dental images. The approach mainly exploits the mixed dataset collected from multiple sources and a pre-trained hybrid Mask RCNN to localize dental carious regions. Evaluations performed by dentists showed that the correctness of the annotated datasets is up to 96% and the accuracy of the proposed system is between 78% and 92%. Moreover, the overall satisfaction level of dentists with the system exceeded 80%.
Keywords: Deep learning; Dental cavities; Dental image processing; Mask RCNN
Year: 2022 PMID: 35494840 PMCID: PMC9044255 DOI: 10.7717/peerj-cs.888
Source DB: PubMed Journal: PeerJ Comput Sci ISSN: 2376-5992
A summary of the dataset containing colored photographic, X-ray, and mixed dental images.
| Image type | No. of instances | Source |
|---|---|---|
| Colored | 90 | Digital lab Spain Granada |
| X-ray | 936 | 120 images from Vahab Archives and 816 images from CMH |
| Mixed | 210 | Mix of DS1 and 120 images from Vahab Archives |
Figure 1. Overview of the proposed carious-region detection and localization.
Figure 2. Architectural design of the proposed Mask-RCNN approach.
Figure 3. Proposed dental cavity detection tool: (A) image upload, (B) submit button, (C) output image area, (D) X-ray image output, and (E) colored image output.
Dataset validation scores between dentists (D1 and D2).
| Dataset | Images (Local) | Images (Public) | Dentist | Precision @IoU = 0.6 | Precision @IoU = 0.7 |
|---|---|---|---|---|---|
| X-ray | 816 | 120 | D1 | 84.45% | 79.54% |
| X-ray | 816 | 120 | D2 | 84.41% | 79.52% |
| Color | – | 90 | D1 | 95.75% | 89.06% |
| Color | – | 90 | D2 | 95.06% | 89.00% |
M-RCNN model configuration employed for the colored, X-ray, and combined dental image datasets.
| Configuration | Value |
|---|---|
| Learning rate | 0.01 |
| # of Epochs | 2 |
| Batch size | 4 |
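The configuration above (learning rate 0.01, 2 epochs, batch size 4) implies a short fine-tuning schedule. A minimal, framework-free sketch of the resulting mini-batch loop; the 10-sample dataset and the step counting are illustrative assumptions, not from the paper:

```python
# Hyperparameters taken from the configuration table above.
LEARNING_RATE = 0.01  # passed to the optimizer in a real training run
EPOCHS = 2
BATCH_SIZE = 4

def batches(samples, batch_size):
    """Yield successive mini-batches from a list of samples."""
    for i in range(0, len(samples), batch_size):
        yield samples[i:i + batch_size]

# Hypothetical dataset of 10 annotated images, represented by indices.
dataset = list(range(10))

steps = 0
for epoch in range(EPOCHS):
    for batch in batches(dataset, BATCH_SIZE):
        steps += 1  # a real loop would run a forward/backward pass here
print(steps)  # → 6 (2 epochs × 3 mini-batches per epoch)
```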
Figure 4. Evaluation results for colored photographic, X-ray, and mixed datasets.
Correctness validation scores for datasets DS1, DS2, and DS3 (colored photographic, X-ray radiographic, and mixed dental images, respectively) using the M-RCNN model, in terms of precision (P) and recall (R).
| IoU threshold | DS1: Colored (P) | DS1: Colored (R) | DS2: X-ray (P) | DS2: X-ray (R) | DS3: Mixed (P) | DS3: Mixed (R) |
|---|---|---|---|---|---|---|
| @IoU = 0.6 | 95.75 | 96.24 | 84.45 | 85.05 | 81.02 | 83.78 |
| @IoU = 0.7 | 89.06 | 92.09 | 79.54 | 80.45 | 76.02 | 78.78 |
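The validation tables report precision and recall at IoU thresholds of 0.6 and 0.7. A minimal sketch of box IoU, with hypothetical toy coordinates, shows how a predicted carious region is matched against a dentist's annotation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted region vs. a ground-truth annotation (toy coordinates).
pred = (10, 10, 50, 50)
gt = (20, 20, 60, 60)
score = iou(pred, gt)
print(round(score, 3))  # → 0.391
```

At the table's thresholds this toy prediction (IoU ≈ 0.39) would not count as a true positive, so it would lower precision; only overlaps of at least 0.6 or 0.7 are credited.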
SUS analysis of dental cavity detection tool.
| Question | D1 | D2 | D3 | A1 | A2 | A3 | A4 | S1 | S2 | S3 | S4 | S5 | S6 | S7 | S8 | S9 | S10 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| I think that I would like to use this tool | 4 | 5 | 5 | 5 | 5 | 4 | 4 | 4 | 4 | 4 | 5 | 4 | 5 | 4 | 4 | 4 | 4 |
| I found the tool unnecessarily complex | 1 | 2 | 2 | 1 | 2 | 2 | 1 | 1 | 2 | 1 | 2 | 2 | 1 | 2 | 2 | 2 | 1 |
| I thought the tool was easy to use | 4 | 5 | 5 | 5 | 5 | 4 | 5 | 5 | 4 | 4 | 5 | 4 | 4 | 5 | 5 | 5 | 5 |
| Support of a technical person is required to use this tool | 1 | 1 | 1 | 2 | 1 | 2 | 2 | 2 | 3 | 2 | 2 | 2 | 2 | 1 | 1 | 2 | 2 |
| This tool is well integrated | 4 | 5 | 5 | 3 | 5 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 5 | 5 | 5 | 5 | 4 |
| I found inconsistency in this tool | 2 | 1 | 1 | 1 | 1 | 2 | 1 | 1 | 2 | 1 | 1 | 2 | 1 | 1 | 1 | 1 | 1 |
| People would learn to use this tool quickly | 4 | 5 | 5 | 5 | 5 | 4 | 4 | 4 | 4 | 4 | 5 | 4 | 5 | 4 | 4 | 4 | 4 |
| I found the tool very cumbersome | 1 | 1 | 3 | 1 | 1 | 2 | 2 | 1 | 1 | 1 | 1 | 2 | 1 | 2 | 2 | 1 | 1 |
| I felt very confident using the tool | 4 | 5 | 4 | 5 | 5 | 4 | 4 | 5 | 5 | 5 | 5 | 4 | 5 | 5 | 5 | 4 | 4 |
| I needed to learn a lot of things before using it | 1 | 1 | 1 | 3 | 3 | 2 | 2 | 2 | 3 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 1 |
| Score | 85 | 97.5 | 90 | 87.5 | 92.5 | 75 | 82.5 | 87.5 | 75 | 85 | 90 | 75 | 92.5 | 87.5 | 87.5 | 85 | 87.5 |

Respondents D1–D3 are dentists, A1–A4 assistants, and S1–S10 students. Group averages: dentists 90.83, assistants 84.38, students 85.25.
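The per-respondent scores above follow the standard System Usability Scale computation: odd-numbered (positive) items contribute response − 1, even-numbered (negative) items contribute 5 − response, and the sum is scaled by 2.5. A minimal sketch, checked against dentist D1's responses from the table:

```python
def sus_score(responses):
    """System Usability Scale score from 10 responses on a 1-5 scale.

    Odd-numbered items are positively worded, even-numbered items
    negatively worded, per the standard SUS scoring rule.
    """
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Dentist D1's responses from the table above, in question order.
d1 = [4, 1, 4, 1, 4, 2, 4, 1, 4, 1]
print(sus_score(d1))  # → 85.0, matching the table's score for D1
```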
Comparison of the proposed M-RCNN approach with state-of-the-art approaches from recent years.
| Dataset | Authors | Technique | Dental image instances | Results (P: precision / A: accuracy, %) |
|---|---|---|---|---|
| Colored photographs | – | K-means | 60 | P: 80.00 |
| Colored photographs | – | J48 | 91 | P: 83.00 |
| Colored photographs | – | Random Forest | 88 | A: 86.30 |
| Colored photographs | – | SVM | 45 | P: 84.00 |
| Colored photographs | – | Deep Conv Net | 3,932 | A: 80.00 |
| Colored photographs | Proposed | M-RCNN | 90 | P: 88.02 |
| X-ray/grayscale radiographs | – | SVM | 100 | P: 86.70 |
| X-ray/grayscale radiographs | – | VGG16 Model | 125 | P: 88.46 |
| X-ray/grayscale radiographs | – | Inception CNN | 3,000 | A: 82.00 |
| X-ray/grayscale radiographs | – | CNN | 217 | A: 84.60 |
| X-ray/grayscale radiographs | – | U-Net | 3,686 | A: 80.00 |
| X-ray/grayscale radiographs | Proposed | M-RCNN | 936 | A: 95.75 |
| Mixed | Proposed | M-RCNN | 210 | P: 81.02 |