Seyed-Ali Sadegh-Zadeh, Ali Rahmani Qeranqayeh, Elhadj Benkhalifa, David Dyke, Lynda Taylor, Mahshid Bagheri.
Abstract
BACKGROUND: Dental caries is a prevalent, complex, chronic disease that is preventable. Accurate and early prediction of caries risk in children leads to better dental health outcomes and helps avoid additional costs and complications. In recent years, artificial intelligence (AI) has been employed in medicine to aid the diagnosis and treatment of disease, and it is a critical tool for the early prediction of caries risk. AIM: By developing computational models and applying machine learning classification techniques, we investigated potential dental caries factors and lifestyle among children under the age of five.
Keywords: artificial intelligence; caries prediction; dental caries; dental medicine; diagnostic prediction
Year: 2022 PMID: 36135159 PMCID: PMC9497737 DOI: 10.3390/dj10090164
Source DB: PubMed Journal: Dent J (Basel) ISSN: 2304-6767
Demographic characteristics (N = 780).
| Categorical Variables | N | % |
|---|---|---|
| | 180 | 23.08% |
| | 600 | 76.92% |
| | 550 | 70.51% |
| | 230 | 29.49% |
| | 330 | 42.31% |
| | 450 | 57.69% |
| | 10 | 1.28% |
| | 770 | 98.72% |
| | 0 | 0.00% |
| | 780 | 100.00% |
| | 0 | 0.00% |
| | 780 | 100.00% |
| | 290 | 16.48% |
| | 0 | 0.00% |
| | 780 | 100.00% |
| | 600 | 76.92% |
| | 180 | 23.08% |
| | 600 | 76.92% |
| | 180 | 23.08% |
| | 250 | 32.05% |
| | 530 | 67.95% |
| | 330 | 42.31% |
| | 450 | 57.69% |
| | 0 | 0.00% |
| | 780 | 100.00% |
| | 0 | 0.00% |
| | 780 | 100.00% |
| | 0 | 0.00% |
| | 780 | 100.00% |
| | 430 | 55.13% |
| | 350 | 44.87% |
| | 0 | 0.00% |
| | 390 | 50.00% |
| | 390 | 50.00% |
| | 0 | 0.00% |
Figure 1. Number of patients in three levels: Low, Moderate, and High Risk.
Figure 2. Number of patients in two levels: Low and Moderate Risk, and High Risk.
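The two-level grouping in Figure 2 simply merges the Low and Moderate classes into one. A minimal sketch of that relabelling (the exact label strings are assumed from the tables in this record):

```python
def binarise(labels):
    """Collapse the three risk levels of Figure 1 into the two of Figure 2."""
    return ["Low & Moderate Risk" if label in ("Low Risk", "Moderate Risk") else label
            for label in labels]

print(binarise(["Low Risk", "Moderate Risk", "High Risk"]))
# → ['Low & Moderate Risk', 'Low & Moderate Risk', 'High Risk']
```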
Figure 3. Accuracy of classifiers for the three risk levels with the Leave-One-Out Cross-Validation method.
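Leave-one-out cross-validation fits a model on all patients except one, tests on the held-out patient, and repeats once per patient. A minimal pure-Python sketch, with a hypothetical 1-nearest-neighbour rule standing in for the paper's classifiers:

```python
def loocv_accuracy(X, y, classify):
    """Leave-one-out CV: hold each sample out once, train on the rest."""
    correct = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        if classify(train_X, train_y, X[i]) == y[i]:
            correct += 1
    return correct / len(X)

def nearest_neighbour(train_X, train_y, x):
    """Hypothetical stand-in classifier: label of the closest training point."""
    dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in train_X]
    return train_y[dists.index(min(dists))]

# Toy data: two well-separated clusters, so every held-out point
# has a same-labelled nearest neighbour.
X = [(0.0,), (0.1,), (0.2,), (1.0,), (1.1,), (1.2,)]
y = ["Low", "Low", "Low", "High", "High", "High"]
print(loocv_accuracy(X, y, nearest_neighbour))  # → 1.0
```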
Classifiers in detail in terms of precision, recall, and F1-score for the three risk levels.
| Classifier | Class | Precision (%) | Recall (%) | F1-Score (%) |
|---|---|---|---|---|
| | High Risk | 98 | 96 | 97 |
| | Moderate Risk | 50 | 67 | 57 |
| | Low Risk | 100 | 100 | 100 |
| | High Risk | 98 | 98 | 98 |
| | Moderate Risk | 67 | 67 | 67 |
| | Low Risk | 100 | 100 | 100 |
| | High Risk | 98 | 96 | 97 |
| | Moderate Risk | 50 | 67 | 57 |
| | Low Risk | 100 | 100 | 100 |
| | High Risk | 94 | 98 | 97 |
| | Moderate Risk | 0 | 0 | 0 |
| | Low Risk | 100 | 100 | 100 |
| | High Risk | 98 | 98 | 98 |
| | Moderate Risk | 67 | 67 | 67 |
| | Low Risk | 100 | 100 | 100 |
| | High Risk | 94 | 96 | 96 |
| | Moderate Risk | 0 | 0 | 0 |
| | Low Risk | 100 | 100 | 100 |
| | High Risk | 98 | 96 | 97 |
| | Moderate Risk | 50 | 67 | 57 |
| | Low Risk | 100 | 100 | 100 |
| | High Risk | 94 | 96 | 96 |
| | Moderate Risk | 0 | 0 | 0 |
| | Low Risk | 100 | 100 | 100 |
| | High Risk | 98 | 96 | 97 |
| | Moderate Risk | 50 | 67 | 57 |
| | Low Risk | 100 | 100 | 100 |
| | High Risk | 95 | 100 | 97 |
| | Moderate Risk | 0 | 0 | 0 |
| | Low Risk | 100 | 100 | 100 |
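The per-class figures in the table above follow the standard precision, recall, and F1 definitions. A minimal sketch with hypothetical predictions, chosen so the Moderate Risk class reproduces a 50/67/57 row:

```python
def per_class_metrics(y_true, y_pred, cls):
    """Precision, recall, and F1 for one class, as percentages."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    precision = 100 * tp / (tp + fp) if tp + fp else 0.0
    recall = 100 * tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical predictions: 2 of 3 Moderate cases found, plus 2 false alarms.
y_true = ["Moderate"] * 3 + ["High"] * 4
y_pred = ["Moderate", "Moderate", "High", "Moderate", "Moderate", "High", "High"]
p, r, f = per_class_metrics(y_true, y_pred, "Moderate")
print(round(p), round(r), round(f))  # → 50 67 57
```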
Figure 4. Accuracy of classifiers for the two risk levels with the Leave-One-Out Cross-Validation method.
Classifiers in detail in terms of precision, recall, and F1-score for the two risk levels.
| Classifier | Class | Precision (%) | Recall (%) | F1-Score (%) |
|---|---|---|---|---|
| | Low & Moderate Risk | 90 | 95 | 93 |
| | High Risk | 98 | 96 | 97 |
| | Low & Moderate Risk | 95 | 86 | 90 |
| | High Risk | 95 | 98 | 97 |
| | Low & Moderate Risk | 88 | 100 | 93 |
| | High Risk | 100 | 95 | 97 |
| | Low & Moderate Risk | 100 | 86 | 92 |
| | High Risk | 95 | 100 | 97 |
| | Low & Moderate Risk | 95 | 95 | 95 |
| | High Risk | 98 | 98 | 98 |
| | Low & Moderate Risk | 95 | 95 | 95 |
| | High Risk | 98 | 98 | 98 |
| | Low & Moderate Risk | 90 | 86 | 88 |
| | High Risk | 95 | 96 | 96 |
| | Low & Moderate Risk | 87 | 95 | 91 |
| | High Risk | 98 | 95 | 96 |
| | Low & Moderate Risk | 95 | 95 | 95 |
| | High Risk | 98 | 98 | 98 |
| | Low & Moderate Risk | 100 | 86 | 92 |
| | High Risk | 95 | 100 | 97 |
Mean, best, and worst accuracy and standard deviation of classifiers for the two risk levels with the K-fold Cross-Validation method (k = 5).
| Classifier | Mean | Standard Deviation | Best | Worst |
|---|---|---|---|---|
| Decision Tree | 93.58% | 8.04 | 100.00% | 81.25% |
| Extreme Gradient Boosting | 94.92% | 7.30 | 100.00% | 81.25% |
| K-Nearest Neighbours | 92.25% | 7.40 | 100.00% | 81.25% |
| Logistic Regression | 94.92% | 7.30 | 100.00% | 81.25% |
| Multilayer Perceptron | 94.92% | 7.30 | 100.00% | 81.25% |
| Random Forest | 94.92% | 7.30 | 100.00% | 81.25% |
| Support Vector Machine (kernel = ‘linear’) | 93.58% | 8.04 | 100.00% | 81.25% |
| Support Vector Machine (kernel = ‘rbf’) | 93.58% | 6.58 | 100.00% | 81.25% |
| Support Vector Machine (kernel = ‘poly’) | 93.58% | 8.04 | 100.00% | 81.25% |
| Support Vector Machine (kernel = ‘sigmoid’) | 96.25% | 7.50 | 100.00% | 81.25% |
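Each row in the k-fold table summarises five per-fold accuracies. A minimal sketch of that summary (the per-fold values below are hypothetical, and the use of the population standard deviation is an assumption; the paper may use the sample version):

```python
import statistics

def kfold_summary(fold_accuracies):
    """Summarise per-fold accuracies (%) as the table does:
    mean, standard deviation, best, and worst fold."""
    return (
        statistics.mean(fold_accuracies),
        statistics.pstdev(fold_accuracies),  # population SD (assumption)
        max(fold_accuracies),
        min(fold_accuracies),
    )

# Hypothetical per-fold accuracies for k = 5
folds = [100.0, 93.75, 100.0, 87.50, 81.25]
mean, sd, best, worst = kfold_summary(folds)
print(mean, best, worst)  # → 92.5 100.0 81.25
```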