Gemma S. Parra-Dominguez, Carlos H. Garcia-Capulin, Raul E. Sanchez-Yanez.
Abstract
The inability to move the facial muscles is known as facial palsy, and it affects various abilities of the patient, for example, performing facial expressions. Recently, automatic approaches aiming to diagnose facial palsy using images and machine learning algorithms have emerged, focusing on providing an objective evaluation of the paralysis severity. This research proposes an approach to analyze and assess the lesion severity as a classification problem with three levels: healthy, slight, and strong palsy. The method explores the use of regional information, meaning that only certain areas of the face are of interest. Experiments carrying out multi-class classification tasks are performed using four different classifiers to validate a set of proposed hand-crafted features. After a set of experiments using this methodology on available image databases, strong results are obtained (up to 95.61% correct detection of palsy patients and 95.58% correct assessment of the severity level). These findings lead us to believe that the analysis of facial paralysis is possible under partial occlusions, provided that face detection is accomplished and facial features are obtained adequately. The results also show that our methodology is suited to operate with other databases while attaining high performance, even though the image conditions differ and the participants do not perform equivalent facial expressions.
Keywords: clinical decision support systems; computerized assessment; facial palsy detection; facial paralysis diagnose; machine learning; medical diagnosis; medical screening; severity grading
Year: 2022 PMID: 35885434 PMCID: PMC9317944 DOI: 10.3390/diagnostics12071528
Source DB: PubMed Journal: Diagnostics (Basel) ISSN: 2075-4418
Figure 1. Framework of the proposed facial palsy assessment system.
Figure 2. (a) The 51 key points inspired by the model proposed by Matthews and Baker [11]; (b–d) facial distances to obtain spatial relations between facial landmarks [6].
Facial symmetry features, introduced by Parra-Dominguez et al. [6].
| No. | Facial Region | Type |
|---|---|---|
| f1–f3 | Eyebrows | Angle |
| f4 | Eyebrows | Max. |
| f5–f7 | Eyebrows | Slope |
| f8 | Eyes | Angle |
| f9–f14 | Eyes | Max. |
| f15 | Mouth | Angle |
| f16–f22 | Mouth | Max. |
| f23 | Nose | Angle |
| f24 | Combined | Angle |
| f25–f27 | Combined | Max. |
| f28–f29 | Combined | Ratio |

The formula for each feature is given in [6].
In f3, L and M are the average height of all the left and right eyebrow points, respectively. In f11, N = (N + N)/2, similarly, O = (O + O)/2. In f19, f20, and f21, W is the distance shown in Figure 2d, and the perimeter values W and W are computed as W = (P28, P29, P30, P31, P37, P38, P39) and W = (P31, P32, P33, P34, P35, P36, P37).
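Although the exact feature formulas are defined in [6], the two most common feature types in the table can be illustrated with a small sketch. Everything below is a hypothetical example, not the paper's definitions: the landmark names, pairings, and coordinates are assumptions. An angle-type feature measures the tilt of a line joining mirrored landmarks, while a max-type feature compares mirrored distances as a ratio.

```python
import math

def angle_deg(p, q):
    """Angle (degrees) of the segment p -> q relative to the horizontal."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

def max_ratio(d_left, d_right):
    """Symmetry ratio in (0, 1]: 1 means perfectly symmetric distances."""
    return min(d_left, d_right) / max(d_left, d_right)

def dist(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

# Toy landmarks for a perfectly symmetric face (x mirrored around x = 0).
left_eye_corner, right_eye_corner = (-30.0, 0.0), (30.0, 0.0)
left_brow, right_brow = (-30.0, 20.0), (30.0, 20.0)

# Angle-type feature: tilt of the inter-ocular line (0 for a level face).
f_angle = angle_deg(left_eye_corner, right_eye_corner)

# Max-type feature: ratio of mirrored eye-to-brow distances.
f_max = max_ratio(dist(left_eye_corner, left_brow),
                  dist(right_eye_corner, right_brow))

print(f_angle, f_max)  # symmetric face -> 0.0 and 1.0
```

On a palsy face the drooping side shifts these landmarks, so the angle drifts away from zero and the ratio drops below one, which is what the classifiers pick up on.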
Figure 3. Example of a face image divided into four facial regions.
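Regional analysis amounts to selecting the subset of the 51 key points that falls in each facial region before computing features. The index ranges below are assumptions chosen for illustration, not the paper's exact assignment:

```python
# Sketch of regional landmark selection over a 51-point model.
# The per-region index ranges are hypothetical.
REGIONS = {
    "eyebrows": range(0, 10),   # assumed: 5 points per eyebrow
    "eyes":     range(10, 22),  # assumed: 6 points per eye
    "nose":     range(22, 31),  # assumed: 9 nose points
    "mouth":    range(31, 51),  # assumed: 20 mouth points
}

def region_points(landmarks, region):
    """Return only the landmarks belonging to the requested facial region."""
    return [landmarks[i] for i in REGIONS[region]]

# Usage with dummy landmarks: 51 (x, y) pairs.
landmarks = [(float(i), 0.0) for i in range(51)]
mouth = region_points(landmarks, "mouth")
print(len(mouth))  # -> 20
```

Because each region is processed independently, a partially occluded face can still be assessed from whichever regions remain visible, as the abstract suggests.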
Required parameters to operate the classifier in the Weka suite, according to [34].
| Method | Parameters | Weka Function |
|---|---|---|
| MLP | Learning rate (L), momentum (M), training time (N), number of neurons in the hidden layers (H), and seed (S) | MultilayerPerceptron |
| SVM | Cost (C), gamma (G), kernel type | LibSVM |
| KNN | Number of neighbors (KNN) and distance function (A) | IBk |
| MNLR | Ridge (R) | Logistic |
In Weka, the parameter N of MLP refers to the number of epochs to train through and the nodes in the network are all sigmoid. Additionally, the radial basis function (RBF) kernel was used in all experiments using SVM and only one neighbor was set for the KNN classifier.
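As a rough sketch of how these options map onto Weka invocations, the snippet below assembles command lines for the four classifiers. The class names and flag letters follow the table above and Weka's documented options, but all parameter values are placeholders; the paper's tuned values are not reproduced here.

```python
# Placeholder configurations -- values are illustrative, not the paper's.
CONFIGS = {
    "MLP":  ("weka.classifiers.functions.MultilayerPerceptron",
             {"-L": 0.3, "-M": 0.2, "-N": 500, "-H": "a", "-S": 1}),
    "SVM":  ("weka.classifiers.functions.LibSVM",
             {"-C": 1.0, "-G": 0.1, "-K": 2}),   # -K 2 selects the RBF kernel
    "KNN":  ("weka.classifiers.lazy.IBk", {"-K": 1}),  # one neighbor, as above
    "MNLR": ("weka.classifiers.functions.Logistic", {"-R": 1e-8}),
}

def weka_command(method, arff_path):
    """Build a java command line training the given classifier on an ARFF file."""
    cls, opts = CONFIGS[method]
    flags = " ".join(f"{k} {v}" for k, v in opts.items())
    return f"java {cls} {flags} -t {arff_path}"

print(weka_command("KNN", "palsy_features.arff"))
```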
First experiment: classifiers’ configuration.
| Method | Parameters |
|---|---|
| MLP | — |
| SVM | — |
| KNN | — |
| MNLR | — |
* Refers to the gamma value for the face, eyes, and mouth evaluation, respectively.
Results of the detection of palsy regions in terms of accuracy.
| Classifier | Face | Eyes | Mouth |
|---|---|---|---|
| MLP | — | — | — |
| SVM | — | — | — |
| KNN | — | — | — |
| MNLR | — | — | — |
Figure 4. Confusion matrix of the detection of the palsy: (a) on the entire face, (b) on the eyes region, and (c) on the mouth region.
Performance results for the detection of palsy regions.
| Region | Classifier | TNR | FNR | TPR | FPR |
|---|---|---|---|---|---|
| Face | SVM | — | — | — | — |
| Eyes | SVM | — | — | — | — |
| Mouth | MLP | — | — | — | — |
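The four rates describe, per region, how the binary decision errs in each direction. With TP/FN counting palsy samples and TN/FP counting healthy ones, they follow directly from the confusion matrix; a minimal sketch with made-up counts (not the paper's results):

```python
def rates(tp, fn, tn, fp):
    """Derive the four rates from binary confusion-matrix counts."""
    return {
        "TPR": tp / (tp + fn),  # sensitivity: palsy correctly detected
        "FNR": fn / (tp + fn),  # palsy samples missed
        "TNR": tn / (tn + fp),  # healthy correctly recognised
        "FPR": fp / (tn + fp),  # healthy flagged as palsy
    }

# Illustrative counts only.
r = rates(tp=90, fn=10, tn=95, fp=5)
print(r)  # TPR 0.9, FNR 0.1, TNR 0.95, FPR 0.05
```

For a screening tool, the FNR (palsy patients reported as healthy) is the rate to watch most closely.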
Data distribution for the prediction of two palsy levels.
| Test | Total of Images | Data Distribution |
|---|---|---|
| Eyes region | 680 | Original data: 208 low-intensity and 472 high-intensity samples |
| Eyes region | 2040 | Augmented data: 624 low-intensity and 1416 high-intensity samples |
| Mouth region | 680 | Original data: 141 low-intensity and 539 high-intensity samples |
| Mouth region | 2040 | Augmented data: 423 low-intensity and 1617 high-intensity samples |
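A quick arithmetic check on the table: each augmented class count is exactly three times its original counterpart, which also accounts for the totals (680 × 3 = 2040):

```python
# Class counts taken directly from the data-distribution table.
original  = {"eyes_low": 208, "eyes_high": 472, "mouth_low": 141, "mouth_high": 539}
augmented = {"eyes_low": 624, "eyes_high": 1416, "mouth_low": 423, "mouth_high": 1617}

# Every class was tripled by the augmentation, so class proportions
# (and thus the low/high imbalance) are preserved.
factor = {k: augmented[k] // original[k] for k in original}
print(factor)  # -> 3 for every class
```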
Second experiment: classifiers’ configuration for the first test.
| Method | Parameters |
|---|---|
| MLP | — |
| SVM | — |
| KNN | — |
| MNLR | — |
* Refers to the values for the eyes and mouth evaluation, respectively.
Results of the prediction of two palsy levels in terms of accuracy using the 29 symmetry features.
| Classifier | Eyes | Mouth |
|---|---|---|
| MLP | — | — |
| SVM | — | — |
| KNN | — | — |
| MNLR | — | — |
Second experiment: classifiers’ configuration for the second test.
| Method | Parameters |
|---|---|
| MLP | — |
| SVM | — |
| KNN | — |
| MNLR | — |
* Refers to the values for the eyes and mouth evaluation, respectively.
Results of the prediction of two palsy levels in terms of accuracy using regional information.
| Classifier | Eyes | Mouth |
|---|---|---|
| MLP | — | — |
| SVM | — | — |
| KNN | — | — |
| MNLR | — | — |
Figure 5. Facial analysis: (a) healthy eyes, (b) slight palsy eyes, and (c) strong palsy eyes; (d) healthy mouth, (e) slight palsy mouth, and (f) strong palsy mouth. Palsy images were obtained from [9].
Data distribution for the prediction of three palsy levels.
| Test | Total of Images | Data Distribution |
|---|---|---|
| Eyes region | 1420 | Original data: 740 healthy, 208 low-intensity and 472 high-intensity samples |
| Eyes region | 4260 | Augmented data: 2220 healthy, 624 low-intensity and 1416 high-intensity samples |
| Mouth region | 1420 | Original data: 740 healthy, 141 low-intensity and 539 high-intensity samples |
| Mouth region | 4260 | Augmented data: 2220 healthy, 423 low-intensity and 1617 high-intensity samples |
Third experiment: classifiers’ configuration for the first test.
| Method | Parameters |
|---|---|
| MLP | — |
| SVM | — |
| KNN | — |
| MNLR | — |
* Refers to the values for the eyes and mouth evaluation, respectively.
Results of the palsy lesion assessment in terms of accuracy using the 29 symmetry features.
| Classifier | Eyes | Mouth |
|---|---|---|
| MLP | — | — |
| SVM | — | — |
| KNN | — | — |
| MNLR | — | — |
Third experiment: classifiers’ configuration for the second test.
| Method | Parameters |
|---|---|
| MLP | — |
| SVM | — |
| KNN | — |
| MNLR | — |
* Refers to the values for the eyes and mouth evaluation, respectively.
Results of the palsy lesion assessment in terms of accuracy using regional information.
| Classifier | Eyes | Mouth |
|---|---|---|
| MLP | — | — |
| SVM | — | — |
| KNN | — | — |
| MNLR | — | — |