Xin Wang1, Xiaoke Zhao2,3,4, Guangying Song1,5, Jianwei Niu2,3,4, Tianmin Xu1,5.
Abstract
Objectives: Machine learning is increasingly being used in the medical field. Based on machine learning models, the present study aims to improve the prediction performance of craniodentofacial morphological harmony judgment after orthodontic treatment and to determine the most significant factors.
Keywords: cephalometric analysis; facial harmony; machine learning; malocclusion; orthodontic treatment
Year: 2022 PMID: 35615666 PMCID: PMC9124867 DOI: 10.3389/fphys.2022.862847
Source DB: PubMed Journal: Front Physiol ISSN: 1664-042X Impact factor: 4.755
FIGURE 1. Landmarks of the lateral cephalogram.
Definitions of the 42 cephalometric features.
| No. | Cephalometric variables | Definition |
|---|---|---|
| 1 | SNA | Anteroposterior position of the maxilla to the anterior cranial base (degrees) |
| 2 | SNB | Anteroposterior position of the mandible to the anterior cranial base (degrees) |
| 3 | ANB | The angle between the NA and NB lines (Down’s points A and B relative to nasion) (degrees) |
| 4 | SND | The angle between the SN and ND line (degrees) |
| 5 | U1/NA | The angle between the line through the long axis of the upper central incisor and the NA line (degrees) |
| 6 | L1/NB | The angle between the line through the long axis of the lower central incisor and NB line (degrees) |
| 7 | L1/AP | The angle between the line through the long axis of the lower central incisor and the AP line (degrees) |
| 8 | U1/L1 | The angle between the line through the long axis of the upper and lower central incisors (degrees) |
| 9 | U1/SN | The angle between the line through the long axis of the upper central incisor and the SN line (degrees) |
| 10 | U1/PP | The angle between the line through the long axis of the upper central incisor and palatal plane (degrees) |
| 11 | L1/MP | The angle between the line through the long axis of the lower central incisor and mandibular plane (degrees) |
| 12 | SN/OP | The angle between the SN line and occlusal plane (degrees) |
| 13 | GoGn/SN | The angle between the SN and GoGn line (degrees) |
| 14 | FH/NP | The angle between the Frankfort horizontal plane and NP line (degrees) |
| 15 | FH/OP | The angle between the Frankfort horizontal plane and occlusal plane (degrees) |
| 16 | MP/FH | The angle between the mandibular plane and Frankfort horizontal plane (degrees) |
| 17 | NA/PA | The angle between the NA and PA line (degrees) |
| 18 | Y | The angle between the sella-gnathion (S-Gn) line and the Frankfort horizontal plane (degrees) |
| 19 | AB/NP | The angle between the AB and NP line (degrees) |
| 20 | U1-NA | The perpendicular distance from U1 (incision superius) to the NA line (mm) |
| 21 | L1-NB | The perpendicular distance from L1 (incision inferius) to the NB line (mm) |
| 22 | Pg-NB | The perpendicular distance from pogonion to the NB line (mm) |
| 23 | SE | Distance from sella to the foot of the perpendicular dropped from the most posterior point of the condyle to the SN line (mm) |
| 24 | S-Ns-Sn | The angle between the S-Ns and Ns-Sn line (degrees) |
| 25 | S-Ns-Bs | The angle between the S-Ns and Ns-Bs line (degrees) |
| 26 | G-Sn-Pos | The angle between the G′-Sn and Sn-Pos line (degrees) |
| 27 | Ns-Prn-Pos | The angle between the Ns-Prn and Prn-Pos line (degrees) |
| 28 | NLA(Cm-Sn-UL) | The angle between the Cm-Sn and Sn-UL line (degrees) |
| 29 | AsUL-FH | The angle between the As-UL line and Frankfort horizontal plane (degrees) |
| 30 | BsLL-FH | The angle between the Bs-LL line and Frankfort horizontal plane (degrees) |
| 31 | AsUL-BsLL | The angle between the As-UL and Bs-LL line (degrees) |
| 32 | LL-Bs-Pos | The angle between the LL-Bs and Bs-Pos line (degrees) |
| 33 | Sn-Stoms | Distance between the subnasale and stomion superius (mm) |
| 34 | Stomi-Mes | Distance between the stomion inferius and soft tissue menton (mm) |
| 35 | Sn-Prn(FH) | The perpendicular distance from the pronasale to the line perpendicular to Frankfort horizontal plane through the subnasale (mm) |
| 36 | Ns-N(FH) | The perpendicular distance from nasion to the line perpendicular to the Frankfort horizontal plane through the soft tissue nasion (mm) |
| 37 | Sn-A (FH) | The perpendicular distance from subspinale to the line perpendicular to Frankfort horizontal plane through the subnasale (mm) |
| 38 | Bs-B(FH) | The perpendicular distance from supramentale to the line perpendicular to the Frankfort horizontal plane through the most posterior point of the mentolabial sulcus (mm) |
| 39 | ChinThickness | Distance between the gnathion and the soft tissue gnathion (mm) |
| 40 | UL-EP | The perpendicular distance from the upper lip to the E-line (pronasale to soft tissue pogonion) (mm) |
| 41 | LL-H | The perpendicular distance from the lower lip to the H-line (upper lip to soft tissue pogonion) (mm) |
| 42 | LL-EP | The perpendicular distance from the lower lip to the E-line (pronasale to soft tissue pogonion) (mm) |
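Each angular feature in the table above is computed from digitized landmark coordinates. Below is a minimal sketch, not the authors' code, of how such an angle (e.g., SNA, feature no. 1) could be derived; the landmark coordinates and the helper name `angle_between` are illustrative assumptions.

```python
import numpy as np

def angle_between(p_vertex, p1, p2):
    """Angle in degrees at p_vertex formed by the rays toward p1 and p2."""
    v1 = np.asarray(p1, dtype=float) - np.asarray(p_vertex, dtype=float)
    v2 = np.asarray(p2, dtype=float) - np.asarray(p_vertex, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical landmark coordinates (mm) digitized from a lateral cephalogram
S = (0.0, 0.0)     # sella
N = (70.0, 10.0)   # nasion
A = (68.0, -35.0)  # point A (subspinale)

sna = angle_between(N, S, A)  # SNA: the angle S-N-A at nasion
```

The same helper covers every angular feature in the table by substituting the appropriate landmark triple; the linear (mm) features are plain point-to-line distances.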
FIGURE 3. Cephalometric measurements of soft tissue.
FIGURE 4. Diagram of the quantitative-evaluation process, which uses the cephalometric features as input data and the expert evaluation scores as output data.
FIGURE 5. Flowchart of the from-scratch cross-validation workflow used to evaluate the performance of each model after feature selection.
FIGURE 6. Pearson correlation coefficients for the 42 factors.
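A correlation-based screening step like the one summarized in Figure 6 can be sketched as follows. This uses synthetic stand-in data; the DataFrame layout, column names, and the cutoff of nine features are illustrative assumptions, not the authors' code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic stand-in: 100 cases x 42 cephalometric features plus an expert score
X = pd.DataFrame(rng.normal(size=(100, 42)),
                 columns=[f"feat_{i}" for i in range(1, 43)])
y = 0.8 * X["feat_1"] + rng.normal(scale=0.5, size=100)  # synthetic target

# Absolute Pearson correlation of each feature with the expert score
corr = X.apply(lambda col: col.corr(y)).abs().sort_values(ascending=False)
top9 = corr.head(9).index.tolist()  # keep, e.g., the nine strongest features
```

Ranking features by |r| and truncating the list is one simple way to arrive at the 9-, 15-, and 17-feature subsets compared later in the paper.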
The list of 13 methods from Scikit-learn.
| Name | Description |
|---|---|
| sklearn.linear_model.LinearRegression | It estimates the coefficients by applying ordinary least squares |
| sklearn.linear_model.Lasso | It estimates sparse coefficients by adding an L1 penalty to the least-squares objective |
| sklearn.linear_model.Ridge | It introduces an L2 penalty on the size of the coefficients |
| sklearn.tree.DecisionTreeRegressor | It is a non-parametric supervised learning method that predicts a target variable by learning decision rules inferred from the input features |
| sklearn.ensemble.GradientBoostingRegressor | It supports a series of different loss functions. Here, we take the default loss function for regression, i.e., least squares |
| sklearn.ensemble.AdaBoostRegressor | It assembles a sequence of weak learners with a weighted majority vote over repeated boosting iterations |
| sklearn.ensemble.BaggingRegressor | It introduces randomization into the construction of an estimator and forms an ensemble by aggregating the individual predictions of that estimator fit on random subsets of the original training set |
| sklearn.ensemble.RandomForestRegressor | It aims to decrease the variance of the forest estimator by using bootstrap samples from the training set and random subsets of candidate features for node splitting |
| sklearn.ensemble.ExtraTreesRegressor | It is similar to random forests in node splitting; however, it randomly generates thresholds for each candidate feature and picks the best of these thresholds as the splitting rule |
| sklearn.neural_network.MLPRegressor | It implements a multilayer perceptron (MLP) with no activation function in the output layer; its output is a set of continuous values, and it uses the squared error as the loss function |
| sklearn.svm.LinearSVR | It supports only the linear kernel when solving regression problems |
| sklearn.svm.SVR | This algorithm offers three kinds of kernels, i.e., linear, polynomial, and RBF. Here, we take the RBF kernel |
| xgboost.XGBRegressor | It implements the Scikit-Learn wrapper interface for XGBoost regression |
https://scikit-learn.org/stable/supervised_learning.html.
https://xgboost.readthedocs.io/en/latest/.
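The initial screening of these regressors can be sketched with Scikit-learn's cross-validation utilities. This is a minimal illustration on synthetic data with three of the thirteen listed models (XGBoost is omitted here to keep the sketch dependency-free); the sample size and fold count are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import AdaBoostRegressor, ExtraTreesRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the cephalometric dataset (120 cases, 9 features)
X, y = make_regression(n_samples=120, n_features=9, noise=5.0, random_state=0)

models = {
    "Linear": LinearRegression(),
    "AdaBoost": AdaBoostRegressor(random_state=0),
    "ExtraTrees": ExtraTreesRegressor(random_state=0),
}

# 5-fold cross-validated MAE (sklearn reports it negated, so flip the sign)
results = {}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_mean_absolute_error")
    results[name] = (-scores.mean(), scores.std())
```

Comparing each model's mean MAE together with its standard deviation across folds is exactly the criterion Figure 7 applies to pick the three finalists.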
FIGURE 7. Results of the initial model screening: AdaBoost, ExtraTrees, and XGBoost achieved the lowest MAE with the smallest standard deviations compared with the other models.
FIGURE 8. Results from the total sample using XGBoost regression, ExtraTrees regression, AdaBoost regression, and linear regression: (A1) mean absolute error (MAE); (B1) root mean square error (RMSE); (C1) Pearson correlation coefficient. Results from the testing set using the same four models: (A2) MAE; (B2) RMSE; (C2) Pearson correlation coefficient.
Machine learning model performance in the testing set.
| Number of features | Performance indicator | XGBoost mean | XGBoost SD | ExtraTrees mean | ExtraTrees SD | AdaBoost mean | AdaBoost SD | Linear mean | Linear SD |
|---|---|---|---|---|---|---|---|---|---|
| 9 | MAE | 0.267 | 0.077 | 0.269 | 0.061 | 0.279 | 0.071 | 0.296 | 0.071 |
| 9 | RMSE | 0.341 | 0.086 | 0.334 | 0.074 | 0.345 | 0.078 | 0.355 | 0.075 |
| 9 | Correlation coefficient | 0.683 | 0.163 | 0.694 | 0.133 | 0.677 | 0.153 | 0.641 | 0.161 |
| 15 | MAE | 0.267 | 0.077 | 0.272 | 0.077 | 0.291 | 0.071 | 0.301 | 0.061 |
| 15 | RMSE | 0.342 | 0.094 | 0.351 | 0.084 | 0.361 | 0.078 | 0.365 | 0.071 |
| 15 | Correlation coefficient | 0.671 | 0.186 | 0.656 | 0.155 | 0.642 | 0.148 | 0.623 | 0.165 |
| 17 | MAE | 0.265 | 0.071 | 0.281 | 0.081 | 0.281 | 0.069 | 0.305 | 0.062 |
| 17 | RMSE | 0.343 | 0.092 | 0.352 | 0.099 | 0.347 | 0.079 | 0.369 | 0.073 |
| 17 | Correlation coefficient | 0.672 | 0.168 | 0.654 | 0.200 | 0.682 | 0.160 | 0.617 | 0.164 |
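The three performance indicators reported for the testing set (MAE, RMSE, and the Pearson correlation coefficient) can be computed as follows; the prediction and expert-score vectors below are hypothetical stand-ins, not values from the paper.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Hypothetical predicted vs. expert harmony scores for a held-out testing set
y_true = np.array([3.1, 2.5, 4.0, 3.6, 2.9, 3.3])
y_pred = np.array([3.0, 2.7, 3.8, 3.5, 3.1, 3.2])

mae = mean_absolute_error(y_true, y_pred)           # mean absolute error
rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # root mean square error
r = np.corrcoef(y_true, y_pred)[0, 1]               # Pearson correlation
```

Lower MAE/RMSE and higher r indicate better agreement with the expert scores, which is how the four models in the table are ranked.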
FIGURE 9. Results of the Pearson correlation coefficient from the testing set when nine factors were included, using the XGBoost regression model.