| Literature DB >> 32365724 |
Iuliia Brishtel, Anam Ahmad Khan, Thomas Schmidt, Tilman Dingler, Shoya Ishimaru, Andreas Dengel.
Abstract
Mind wandering is a drift of attention away from the physical world and towards our thoughts and concerns. It affects our cognitive state in ways that can foster creativity but hinder productivity. In the context of learning, mind wandering is primarily associated with lower performance. This study has two goals. First, we investigate the effects of text semantics and music on the frequency and type of mind wandering. Second, using eye-tracking and electrodermal features, we propose a novel technique for automatic, user-independent detection of mind wandering. We find that mind wandering was most frequent in texts for which readers had high expertise and that were combined with sad music. Furthermore, a significant increase in task-related thoughts was observed for texts for which readers had little prior knowledge. A Random Forest classification model yielded an F1-Score of 0.78 when using only electrodermal features to detect mind wandering, of 0.80 when using only eye-movement features, and of 0.83 when using both. Our findings pave the way for building applications that automatically detect events of mind wandering during reading.
Keywords: attention-aware systems; electrodermal activity; eye tracking; meta-awareness; mind wandering; reading
Year: 2020 PMID: 32365724 PMCID: PMC7248717 DOI: 10.3390/s20092546
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Experimental Design: experimental and control conditions with sample sizes per cell.
| 3 × 3 (Topic × Music Type) | Sad | Happy | No-Music |
|---|---|---|---|
| Psychology | 80 | 80 | 160 |
| Computer Science | 80 | 80 | 160 |
| Random Topic | 80 | 80 | 160 |
Figure 1. A participant performing the reading task. We collected eye movements using a Tobii 4C eye tracker, EDA using an Empatica E4 wristband, as well as occurrences of mind wandering through self-reports.
Analysis of Variance. Bold font represents main and interaction effects.
| Source | df | F | p |
|---|---|---|---|
| **PAR** | 1.9 | 79.7 ** | 0.01 |
| C vs P | 1.0 | 64.3 ** | 0.01 |
| RT vs (C + P) | 1.0 | 94.3 ** | 0.01 |
| **Music** | 1.6 | 3.5 * | 0.05 |
| S vs H | 1.0 | 0.4 | 0.52 |
| (S + H) vs NM | 1.0 | 10.3 ** | 0.01 |
| | 1.7 | 1.2 | 0.30 |
| **PAR × Music** | 2.7 | 2.8 * | 0.05 |
| S CS vs (S RT + S P) | 1.0 | 6.6 * | 0.02 |
| H RT vs (H CS + H P) | 1.0 | 5.1 * | 0.04 |
| **PAR** | 1.7 | 4.3 * | 0.03 |
| RT vs (C + P) | 1.0 | 6.1 * | 0.02 |
| | 1.7 | 4.3 | 0.11 |
| | 2.8 | 1.2 | 0.31 |
| | 1.9 | 0.1 | 0.87 |
| | 1.4 | 0.6 | 0.48 |
| | 3.1 | 0.8 | 0.53 |
PAR: Personal/Academic Relevance. MW: Mind Wandering. NM: No-Music, S: Sad, H: Happy. C: Computer Science. P: Psychology. RT: Random Topic. * p < 0.05, ** p < 0.01.
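The F-ratios above come from a repeated-measures ANOVA with Greenhouse-Geisser-corrected degrees of freedom (hence the fractional df values). As a minimal illustration of the F statistic itself, the sketch below computes a one-way between-groups ANOVA on toy data; it does not reproduce the study's repeated-measures design, correction, or planned contrasts.

```python
def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a list of samples (one per group)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-groups sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: spread of observations around their group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Toy data for three hypothetical conditions (not the study's measurements)
f_stat, df_b, df_w = one_way_anova_f([[1, 2, 3], [2, 3, 4], [5, 6, 7]])
```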
Descriptive Statistics.
| Condition | Mind Wandering M(SD) | TRTs (Task-Related Thoughts) M(SD) |
|---|---|---|
| Sad Computer Science | 0.34(0.38) | 0.77(0.88) |
| Sad Psychology | 0.18(0.26) | 0.87(0.98) |
| Sad Random Topic | 0.19(0.27) | 0.91(1.05) |
| Happy Computer Science | 0.22(0.35) | 0.47(0.67) |
| Happy Psychology | 0.25(0.42) | 0.53(0.76) |
| Happy Random Topic | 0.39(0.52) | 1.01(1.06) |
| Baseline Computer Science | 0.12(0.18) | 0.60(0.72) |
| Baseline Psychology | 0.08(0.11) | 0.60(0.75) |
| Baseline Random Topic | 0.15(0.14) | 0.87(0.92) |
Figure 2. Eye movements during the reading task (fixation points in blue and regression points in red). Top: paragraph with reported mind wandering; bottom: paragraph with focused reading behaviour.
Eye tracker features. For all features, the mean was calculated; for features in bold, min and max values were additionally calculated.
| Features | Description |
|---|---|
| **Fixation duration** | Duration of a fixation point in milliseconds |
| **Pupil diameter** | Diameter of pupil in pixels (z-score) |
| **Saccade length** | Distance in pixels between two subsequent fixations |
| **Saccade duration** | Transition between two subsequent fixations in milliseconds |
| **Saccade angle** | Angle in radians between |
| **Regression length** | Backward transition between two fixation points in pixels |
| Number of regressions | Total number of regressions within one paragraph |
| Number of fixations | Total number of fixation points within one paragraph |
| Number of saccades | Total number of saccades within one paragraph |
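A minimal sketch of how features like those in the table could be computed from a sequence of fixations. The record layout `(x_px, y_px, start_ms, end_ms)`, the simple "backward in x" regression test, and all names are illustrative assumptions, not the authors' implementation; min/max aggregation and the remaining features are omitted for brevity.

```python
import math

def eye_features(fixations):
    """Compute per-paragraph means and counts from (x_px, y_px, start_ms, end_ms) records."""
    durations = [end - start for _, _, start, end in fixations]
    sacc_len, sacc_dur, regressions = [], [], 0
    for (x0, y0, _, e0), (x1, y1, s1, _) in zip(fixations, fixations[1:]):
        sacc_len.append(math.hypot(x1 - x0, y1 - y0))  # distance between fixations
        sacc_dur.append(s1 - e0)                        # transition time between fixations
        if x1 < x0:                                     # crude left-to-right regression test
            regressions += 1
    mean = lambda v: sum(v) / len(v) if v else 0.0
    return {
        "fixation_duration_mean": mean(durations),
        "saccade_length_mean": mean(sacc_len),
        "saccade_duration_mean": mean(sacc_dur),
        "number_of_fixations": len(fixations),
        "number_of_saccades": len(sacc_len),
        "number_of_regressions": regressions,
    }
```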
Figure A1. Example of Convex Optimization Approach for EDA Decomposition. (Top left): EDA signal (z-score). (Top right): Tonic component. (Bottom left): Phasic component. (Bottom right): Sparse component (phasic driver).
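Figure A1's decomposition is obtained with a convex-optimization model of the EDA signal. As a much simpler stand-in that only illustrates the idea of separating a slow tonic level from fast phasic responses, the sketch below uses a sliding-median baseline; the function name and window size are illustrative assumptions, not the paper's method.

```python
from statistics import median

def decompose_eda(signal, win=5):
    """Split an EDA trace into a slow tonic baseline and a fast phasic residual."""
    half = win // 2
    # Sliding median tracks the slow baseline while ignoring brief peaks
    tonic = [median(signal[max(0, i - half): i + half + 1]) for i in range(len(signal))]
    phasic = [s - t for s, t in zip(signal, tonic)]
    return tonic, phasic
```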
In this table, we report the results for different classifiers and feature sets. Values are mean (SD). As seen in the table, the Random Forest-based classifiers achieved the highest F1-Scores. The combination of eye and EDA features achieved the highest classification accuracy.
| Classifier | Feature Type | Kappa | Accuracy | AUC | Precision | Recall | F1-Score |
|---|---|---|---|---|---|---|---|
| | Eye | 0.25(0.21) | 0.72(0.14) | 0.70(0.12) | 0.75(0.12) | 0.85(0.11) | 0.72(0.15) |
| | EDA | 0.23(0.28) | 0.70(0.17) | 0.66(0.19) | 0.73(0.16) | 0.84(0.12) | 0.70(0.18) |
| | Eye + EDA | 0.26(0.22) | 0.73(0.13) | 0.71(0.16) | 0.76(0.11) | 0.85(0.10) | 0.73(0.13) |
| | Eye + EDA + Behavior | 0.31(0.27) | 0.76(0.13) | 0.72(0.17) | 0.79(0.10) | 0.86(0.09) | 0.76(0.13) |
| | Eye | 0.25(0.21) | 0.80(0.09) | 0.66(0.12) | 0.80(0.09) | 0.83(0.12) | 0.80(0.09) |
| | EDA | 0.15(0.15) | 0.83(0.08) | 0.62(0.13) | 0.78(0.08) | 0.82(0.09) | 0.77(0.08) |
| | Eye + EDA | 0.29(0.27) | 0.83(0.08) | 0.65(0.16) | 0.83(0.08) | 0.84(0.11) | 0.83(0.08) |
| | Eye + EDA + Behavior | 0.31(0.27) | 0.76(0.13) | 0.69(0.15) | 0.82(0.09) | 0.86(0.10) | 0.82(0.10) |
| | Eye | 0.26(0.24) | 0.78(0.13) | 0.68(0.13) | 0.78(0.11) | 0.85(0.10) | 0.77(0.13) |
| | EDA | 0.26(0.23) | 0.73(0.14) | 0.67(0.16) | 0.76(0.12) | 0.86(0.09) | 0.73(0.14) |
| | Eye + EDA | 0.37(0.27) | 0.79(0.15) | 0.73(0.17) | 0.80(0.12) | 0.87(0.10) | 0.79(0.15) |
| | Eye + EDA + Behavior | 0.41(0.28) | 0.80(0.14) | 0.77(0.14) | 0.82(0.07) | 0.88(0.09) | 0.80(0.14) |
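The metrics reported in the table can all be recomputed from a binary confusion matrix. A self-contained sketch with illustrative counts (not the study's data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, F1, and Cohen's kappa from confusion-matrix counts."""
    n = tp + fp + fn + tn
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_chance = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
    kappa = (accuracy - p_chance) / (1 - p_chance)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "kappa": kappa}

# Illustrative confusion matrix: 40 true positives, 10 false positives, etc.
metrics = classification_metrics(tp=40, fp=10, fn=10, tn=40)
```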
Figure 3. Feature importance graph for the Random Forest classification models using the SHAP method. Top left: Eye-based model. Top right: EDA-based model. Bottom left: Eye and EDA-based model. Bottom right: Sensory and Behavior-based model.
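The SHAP analysis in Figure 3 requires a trained model and the shap library. As a lightweight, model-agnostic stand-in for the same idea, permutation importance measures how much a score drops when one feature column is shuffled; the predictor, scorer, and data below are toy assumptions, not the study's models or measurements.

```python
import random

def permutation_importance(predict, X, y, score, seed=0):
    """Score drop per feature when that feature's column is shuffled."""
    rng = random.Random(seed)
    base = score(predict(X), y)
    importances = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        rng.shuffle(col)
        # Rebuild the dataset with only column j permuted
        Xp = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
        importances.append(base - score(predict(Xp), y))
    return importances

# Toy predictor that only looks at feature 0, so feature 1 should get ~0 importance
predict = lambda X: [1 if row[0] > 0 else 0 for row in X]
score = lambda preds, y: sum(int(p == t) for p, t in zip(preds, y)) / len(y)
imps = permutation_importance(predict, [[1, 0], [-1, 1], [1, 1], [-1, 0]], [1, 0, 1, 0], score)
```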