Yu Guo, Xiaoqian Liu, Xiaoyang Wang, Tingshao Zhu, Wei Zhan.
Abstract
In recent years, somatosensory interaction technology, represented by Microsoft's Kinect hardware platform, has been widely used in fields such as entertainment, education, and medicine. Kinect can easily capture and record behavioral data, which creates new opportunities for research correlating behavior with psychological characteristics. In this paper, an automatic decision-making-style recognition method is proposed. Experiments involving 240 subjects were conducted to obtain face data and individual decision-making style scores. The face data were collected with the Kinect camera, and the decision-making style scores were obtained via a questionnaire. To realize automatic recognition of an individual's decision-making style, machine learning was employed to establish a mapping between the face data and the scale-based evaluation of decision-making style. This study adopts a variety of classical machine learning algorithms, including linear regression, support vector machine regression, ridge regression, and Bayesian ridge regression. The experimental results show that the linear regression model returns the best results: the correlation coefficient between its predictions and the scale evaluation results was 0.6, a moderate-to-high correlation. The results verify the feasibility of automatic decision-making style recognition based on facial analysis.
Keywords: Kinect; decision-making style; face data; linear regression; machine learning
Year: 2022 PMID: 35310212 PMCID: PMC8931824 DOI: 10.3389/fpsyg.2022.751914
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
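The modeling pipeline the abstract describes — regressing questionnaire scores on facial features and scoring each model by the correlation between predicted and actual values — can be sketched roughly as follows. This is a minimal illustration assuming scikit-learn and synthetic stand-in data; the real feature extraction from the 1,347 Kinect facial points is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, BayesianRidge
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for the real data: 240 subjects, one feature
# vector per subject (in the paper, derived from Kinect facial points),
# and one GDMS dimension score per subject.
X = rng.normal(size=(240, 64))
y = X @ rng.normal(size=64) + rng.normal(scale=2.0, size=240)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The four classical algorithms named in the abstract.
models = {
    "Linear regression": LinearRegression(),
    "SVR": SVR(),
    "Ridge regression": Ridge(),
    "Bayesian ridge regression": BayesianRidge(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    # Pearson correlation between predicted and scale scores,
    # the paper's headline evaluation metric.
    r = np.corrcoef(pred, y_test)[0, 1]
    print(f"{name}: r = {r:.2f}")
```

On real data, each GDMS dimension (spontaneous, avoidant, rational, dependent, intuition) would be fitted as a separate regression target.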
Demographic information of subjects.
| Demographic information | n | % |
| --- | --- | --- |
| **Gender** | | |
| Female | 130 | 54.20 |
| Male | 110 | 45.80 |
| **Relationship status** | | |
| Single | 138 | 57.50 |
| Partnered | 96 | 40.00 |
| Married | 4 | 1.67 |
| Divorced | 2 | 0.83 |
| **Education** | | |
| High school | 2 | 0.83 |
| In college | 35 | 14.60 |
| University/higher vocational | 36 | 15.00 |
| In graduate school | 152 | 63.30 |
| Postgraduate | 15 | 6.25 |
| **(category unlabeled in source)** | | |
| Unknown | 9 | 3.75 |
| <1 | 11 | 4.58 |
| 1–5 | 50 | 20.80 |
| 5–10 | 84 | 35.00 |
| 10–30 | 76 | 31.67 |
| 30–60 | 8 | 3.33 |
| >60 | 2 | 0.83 |
N = 240. Subjects were on average 39.5 years old (SD = 10.1), and subjects' ages did not differ by condition.
FIGURE 1. A Kinect face image with 1,347 facial recognition points.
Average scores on the five dimensions of the GDMS scale.
| Gender | Spontaneous M (SD) | Avoidant M (SD) | Rational M (SD) | Dependent M (SD) | Intuition M (SD) |
| --- | --- | --- | --- | --- | --- |
| Female | 17.67 (4.60) | 18.06 (2.61) | 16.71 (4.33) | 15.21 (4.02) | 18.49 (3.53) |
| Male | 15.66 (4.52) | 19.90 (3.13) | 15.20 (4.12) | 17.00 (4.86) | 17.05 (3.62) |
Correlation coefficients between the predicted and actual decision-making style values for the four algorithm models.
| Algorithm (W) | Spontaneous | Avoidant | Rational | Dependent | Intuition | Mean | SD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Algorithm 1, W₁ | 0.60 | 0.44 | 0.45 | 0.57 | 0.40 | 0.49 | 0.078 |
| Algorithm 1, W₂ | 0.45 | 0.44 | 0.50 | 0.61 | 0.41 | 0.48 | 0.070 |
| Algorithm 2, W₁ | 0.41 | 0.27 | 0.27 | 0.41 | 0.39 | 0.35 | 0.066 |
| Algorithm 2, W₂ | 0.38 | 0.27 | 0.28 | 0.43 | 0.38 | 0.35 | 0.062 |
| Algorithm 3, W₁ | 0.48 | 0.37 | 0.45 | 0.52 | 0.40 | 0.44 | 0.054 |
| Algorithm 3, W₂ | 0.45 | 0.41 | 0.44 | 0.53 | 0.43 | 0.45 | 0.041 |
| Algorithm 4, W₁ | 0.42 | 0.34 | 0.48 | 0.53 | 0.38 | 0.43 | 0.068 |
| Algorithm 4, W₂ | 0.42 | 0.41 | 0.51 | 0.52 | 0.33 | 0.44 | 0.070 |
| Mean, W₁ | 0.48 | 0.36 | 0.41 | 0.50 | 0.39 | | |
| Mean, W₂ | 0.43 | 0.38 | 0.43 | 0.52 | 0.39 | | |
| SD, W₁ | 0.076 | 0.061 | 0.083 | 0.059 | 0.008 | | |
| SD, W₂ | 0.027 | 0.044 | 0.081 | 0.049 | 0.007 | | |
In column 1, W refers to the size of the sliding window used to eliminate noise.
*p-value < 0.005.
**p-value < 0.001.
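The tables report results for two sliding-window sizes W used to denoise the raw point data. A common realization of such a filter is a moving average over each coordinate's time series; the sketch below assumes that reading (the paper's exact filter is not specified here).

```python
import numpy as np

def smooth(series: np.ndarray, w: int) -> np.ndarray:
    """Moving-average filter with window size w (the 'W' in the tables).
    Returns a series shortened by w - 1 samples ('valid' convolution)."""
    kernel = np.ones(w) / w
    return np.convolve(series, kernel, mode="valid")

# Example: a noisy 1-D trace standing in for one facial-point coordinate.
t = np.linspace(0, 1, 100)
noisy = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(0).normal(size=100)
print(smooth(noisy, 5).shape)  # (96,)
```

Larger W removes more jitter but also blurs genuine facial motion, which is presumably why two window sizes were compared.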
Root mean square error of the four algorithms.
| Algorithm (W) | Spontaneous | Avoidant | Rational | Dependent | Intuition |
| --- | --- | --- | --- | --- | --- |
| Algorithm 1, W₁ | 4.23 | 5.14 | 4.73 | 3.16 | 4.58 |
| Algorithm 1, W₂ | 3.67 | 5.21 | 4.53 | 3.02 | 4.81 |
| Algorithm 2, W₁ | 3.54 | 5.43 | 4.38 | 3.35 | 5.36 |
| Algorithm 2, W₂ | 3.58 | 5.60 | 4.44 | 3.29 | 5.49 |
| Algorithm 3, W₁ | 3.50 | 5.34 | 3.92 | 3.17 | 5.13 |
| Algorithm 3, W₂ | 3.53 | 5.33 | 3.90 | 2.87 | 5.11 |
| Algorithm 4, W₁ | 3.48 | 5.33 | 4.60 | 3.20 | 5.35 |
| Algorithm 4, W₂ | 3.50 | 5.20 | 4.60 | 3.11 | 5.35 |
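The root mean square error reported above is the standard definition; a minimal reference implementation:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between actual and predicted scores:
    sqrt(mean((y_true - y_pred)^2))."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Toy example with score-like values (each prediction off by 1 point):
print(rmse([17, 18, 16], [16, 19, 15]))  # 1.0
```

Unlike the correlation coefficient, RMSE is in the units of the GDMS subscale scores, so an RMSE of about 3–5 points should be read against the subscale score range.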
Split-half reliability of the four algorithms.
| Algorithm (W) | Spontaneous | Avoidant | Rational | Dependent | Intuition |
| --- | --- | --- | --- | --- | --- |
| Algorithm 1, W₁ | 0.70 | 0.73 | 0.70 | 0.76 | 0.69 |
| Algorithm 1, W₂ | 0.68 | 0.77 | 0.73 | 0.71 | 0.68 |
| Algorithm 2, W₁ | 0.41 | 0.24 | 0.49 | 0.20 | 0.23 |
| Algorithm 2, W₂ | 0.47 | 0.33 | 0.47 | 0.51 | 0.20 |
| Algorithm 3, W₁ | 0.63 | 0.39 | 0.76 | 0.39 | 0.42 |
| Algorithm 3, W₂ | 0.61 | 0.45 | 0.72 | 0.43 | 0.52 |
| Algorithm 4, W₁ | 0.54 | 0.69 | 0.69 | 0.49 | 0.59 |
| Algorithm 4, W₂ | 0.53 | 0.62 | 0.71 | 0.53 | 0.61 |
In column 1, W refers to the size of the sliding window used to eliminate noise.
*p-value < 0.005.
**p-value < 0.001.
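Split-half reliability is conventionally computed by splitting a measure's items into two halves, correlating the half scores across subjects, and applying the Spearman-Brown correction r_sb = 2r / (1 + r). A sketch under that standard formulation (the paper's exact splitting scheme is not specified here):

```python
import numpy as np

def split_half_reliability(item_scores: np.ndarray) -> float:
    """item_scores: array of shape (n_subjects, n_items).
    Splits items into alternating halves, correlates the per-subject
    half sums, and applies the Spearman-Brown correction."""
    half_a = item_scores[:, ::2].sum(axis=1)   # items 0, 2, 4, ...
    half_b = item_scores[:, 1::2].sum(axis=1)  # items 1, 3, 5, ...
    r = np.corrcoef(half_a, half_b)[0, 1]
    return 2 * r / (1 + r)

# Toy example: 100 subjects x 5 items, items driven by a shared trait.
rng = np.random.default_rng(0)
trait = rng.normal(size=(100, 1))
items = trait + 0.3 * rng.normal(size=(100, 5))
print(round(split_half_reliability(items), 2))
```

Values near the 0.7 range, as in Algorithm 1's rows above, are usually taken as acceptable internal consistency.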
FIGURE 2. The specific information of the 36 key facial points. In the enumeration type, each element's variable name (left of the equals sign) represents the position of the key point on the face, and its value (right of the equals sign) represents the ID of that point among all 1,347 facial points.
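The enumeration described in Figure 2 can be sketched as a Python `IntEnum`. The member names and ID values below are placeholders for illustration only, not the paper's actual 36 key points.

```python
from enum import IntEnum

class FaceKeyPoint(IntEnum):
    """Maps a named facial position to its point ID among the
    1,347 Kinect HD face points. IDs here are hypothetical."""
    EYE_LEFT_INNER = 210    # hypothetical ID
    EYE_RIGHT_INNER = 843   # hypothetical ID
    NOSE_TIP = 18           # hypothetical ID
    MOUTH_LEFT_CORNER = 91  # hypothetical ID

# A member behaves as both a readable name and an integer index:
print(FaceKeyPoint.NOSE_TIP.value)  # 18
```

Indexing the full 1,347-point array with such an enum keeps feature-extraction code readable while staying a plain integer lookup.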