Twin Yoshua R Destyanto, Ray F Lin.
Abstract
Recently, tools for detecting human activities have contributed prominently to health-issue prevention and long-term healthcare. To this end, the current study evaluated the performance of eye-movement complexity features (from multi-scale entropy analysis) against conventional eye-movement features (from basic statistical measurements) in detecting daily computer activities: reading an English scientific paper, watching an English movie-trailer video, and typing English sentences. A total of 150 students performed these computer activities while their eye movements were captured with a desktop eye-tracker (GP3 HD, Gazepoint™, Canada). The collected eye-movement data were processed into 56 conventional and 550 complexity features. A statistical test, analysis of variance (ANOVA), was used to screen these features, retaining 45 conventional and 379 complexity features. Four combinations of these feature sets were used to build 12 AI models with Support Vector Machine (SVM), Decision Tree (DT), and Random Forest (RF) classifiers. Model comparisons showed the superiority of complexity features (85.34% accuracy) over conventional features (66.98% accuracy). Furthermore, screening the eye-movement features with ANOVA improved recognition accuracy by 2.29%. These results demonstrate the superiority of eye-movement complexity features.
Keywords: complexity; eye-movement features; human activity recognition; multi-scale entropy
Year: 2022 PMID: 35742067 PMCID: PMC9222268 DOI: 10.3390/healthcare10061016
Source DB: PubMed Journal: Healthcare (Basel) ISSN: 2227-9032
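The multi-scale entropy (MSE) analysis behind the complexity features can be sketched as follows. This is a minimal pure-Python illustration, not the authors' implementation; the function names and the parameter choices (m = 2, r = 0.2, the scale range) are assumptions based on common MSE practice.

```python
import math

def _std(xs):
    mu = sum(xs) / len(xs)
    return (sum((x - mu) ** 2 for x in xs) / len(xs)) ** 0.5

def coarse_grain(signal, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    return [sum(signal[i:i + scale]) / scale
            for i in range(0, len(signal) - scale + 1, scale)]

def sample_entropy(signal, m, tol):
    """SampEn: -log of the ratio of (m+1)-point to m-point template matches
    (self-matches excluded)."""
    n = len(signal)

    def matches(length):
        temps = [signal[i:i + length] for i in range(n - length + 1)]
        return sum(
            1
            for i in range(len(temps))
            for j in range(i + 1, len(temps))
            if max(abs(a - b) for a, b in zip(temps[i], temps[j])) <= tol
        )

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(signal, max_scale=5, m=2, r=0.2):
    """SampEn of the coarse-grained series at scales 1..max_scale;
    the tolerance is fixed at r times the SD of the original series."""
    tol = r * _std(signal)
    return [sample_entropy(coarse_grain(signal, s), m, tol)
            for s in range(1, max_scale + 1)]
```

A complexity index (CI) like the one screened in the tables below is commonly obtained by summing the entropy values across scales.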
Figure 1. Apparatus and participant setting during the experiment.
Figure 2. Experiment process from explanation to the third task.
The Architectures of AI Models.
| No. | ML Method | Special Parameters |
|---|---|---|
| 1 | SVM | Default |
| 2 | DT | Max depth: 5; Min. samples leaf: 7 |
| 3 | RF | Max depth: 5; n_estimators: 1000 |
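A sketch of how these three classifiers might be instantiated, assuming the scikit-learn library (the record does not state the tooling used) and using the parameters listed in the table:

```python
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

def build_models():
    """One instance of each architecture from the table above."""
    return {
        "SVM": SVC(),  # default parameters
        "DT": DecisionTreeClassifier(max_depth=5, min_samples_leaf=7),
        "RF": RandomForestClassifier(max_depth=5, n_estimators=1000),
    }
```

Each model would then be fitted on the selected eye-movement feature vectors and evaluated, e.g. with cross-validation.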
Conventional eye-movement features screened from the full feature set using the ANOVA method.
| Features | Mean | STD | Var | Median | Max | Min | Skew |
|---|---|---|---|---|---|---|---|
| FPOGD | *** | *** | *** | *** | *** | ** | |
| LPD | *** | *** | *** | *** | *** | *** | |
| LPMM | *** | *** | *** | *** | *** | *** | * |
| RPD | *** | *** | *** | *** | *** | *** | |
| RPMM | *** | *** | *** | *** | *** | *** | *** |
| BKDUR | *** | *** | *** | *** | *** | *** | *** |
| BKPMIN | | | | | | | |
| SAC_MAG | *** | *** | *** | *** | *** | *** | |
* indicates p-value < 0.05; ** indicates p-value < 0.01; *** indicates p-value < 0.001; an empty cell indicates p-value > 0.05.
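The ANOVA screening above can be illustrated with a one-way F statistic computed per feature across the three activity groups. A minimal pure-Python sketch (the helper name is hypothetical; the study presumably used a statistics package and compared p-values rather than raw F values):

```python
def f_statistic(groups):
    """One-way ANOVA F statistic for one feature across activity groups:
    between-group mean square over within-group mean square."""
    k = len(groups)                          # number of groups (activities)
    n = sum(len(g) for g in groups)          # total observations
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within
```

A feature is retained when the p-value associated with its F statistic (from the F distribution with k − 1 and n − k degrees of freedom) falls below 0.05.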
Eye-movement complexity features screened from the complexity index (CI) of all intrinsic mode functions (IMFs).
| # of IMF | FPOGD | FPOGX | FPOGY | LPCX | LPCY | LPD | RPD | LPMM | RPMM |
|---|---|---|---|---|---|---|---|---|---|
| IMF 1 | *** | *** | *** | *** | *** | *** | *** | *** | |
| IMF 2 | * | *** | *** | *** | *** | *** | *** | *** | *** |
| IMF 3 | *** | *** | *** | *** | *** | *** | *** | *** | *** |
| IMF 4 | *** | *** | *** | *** | *** | ** | *** | *** | |
| IMF 5 | *** | ** | *** | ** | *** | *** | *** | | |
| IMF 6 | *** | *** | * | ** | | | | | |
* indicates p-value < 0.05; ** indicates p-value < 0.01; *** indicates p-value < 0.001; an empty cell indicates p-value > 0.05.
Figure 3. Accuracy comparison of six types of ML models built using conventional eye-movement features. Numbers in parentheses give the number of features used to build each model. Means that do not share a letter are significantly different.
Confusion matrices of the RF models built using all conventional eye-movement features and using only the important (ANOVA-screened) ones.
RF, all conventional features (mean counts; rows = actual, columns = predicted):

| Actual \ Predicted | Reading | Typing | Watching |
|---|---|---|---|
| Reading | 1.90 | 1.37 | 26.89 |
| Typing | 0.53 | 29.31 | 0.32 |
| Watching | 0.32 | 0.78 | 29.05 |

RF, important conventional features (mean counts; rows = actual, columns = predicted):

| Actual \ Predicted | Reading | Typing | Watching |
|---|---|---|---|
| Reading | 11.32 | 0.27 | 18.57 |
| Typing | 0.44 | 28.63 | 1.08 |
| Watching | 4.97 | 0.60 | 24.59 |
The accuracy degree is denoted by shading, with lighter hues indicative of lower accuracy and darker hues indicating higher accuracy (white indicates 0% and red represents 100%).
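Overall accuracy can be recovered from such a matrix as the diagonal (correct) mass over the total mass. A small sketch, using the mean counts of the RF all-conventional matrix above:

```python
def accuracy(confusion):
    """Overall accuracy of a confusion matrix whose rows are actual
    classes and columns are predicted classes (counts or mean counts)."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Mean counts for the RF model on all conventional features (table above):
rf_all_conventional = [
    [1.90, 1.37, 26.89],   # actual: reading
    [0.53, 29.31, 0.32],   # actual: typing
    [0.32, 0.78, 29.05],   # actual: watching
]
```

Applied to these counts, the function yields roughly 0.67, consistent with the ~66.98% conventional-feature accuracy reported in the abstract.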
Figure 4. Accuracy comparison of six types of ML models built using eye-movement complexity features. Numbers in parentheses give the number of features used to build each model. Means that do not share a letter are significantly different.
Confusion matrices of the DT and RF models built using all complexity features and using only the important (ANOVA-screened) ones.
DT, all complexity features (mean counts; rows = actual, columns = predicted):

| Actual \ Predicted | Reading | Watching | Typing |
|---|---|---|---|
| Reading | 18.81 | 8.90 | 2.46 |
| Watching | 7.15 | 20.80 | 2.20 |
| Typing | 1.62 | 2.08 | 26.46 |

DT, important complexity features:

| Actual \ Predicted | Reading | Watching | Typing |
|---|---|---|---|
| Reading | 19.97 | 8.57 | 1.61 |
| Watching | 7.76 | 20.96 | 1.44 |
| Typing | 1.97 | 1.42 | 26.77 |

RF, all complexity features:

| Actual \ Predicted | Reading | Typing | Watching |
|---|---|---|---|
| Reading | 22.18 | 6.85 | 1.13 |
| Typing | 5.03 | 24.42 | 0.71 |
| Watching | 0.15 | 0.34 | 29.67 |

RF, important complexity features:

| Actual \ Predicted | Reading | Typing | Watching |
|---|---|---|---|
| Reading | 23.89 | 5.48 | 0.73 |
| Typing | 4.58 | 24.69 | 0.91 |
| Watching | 0.23 | 0.20 | 29.75 |
Figure 5. Accuracy comparison of AI models built using conventional and complexity eye-movement features. Means that do not share a letter are significantly different.