| Literature DB >> 34069027 |
Shan Zhang1, Zihan Yan1,2, Shardul Sapkota1,3, Shengdong Zhao1, Wei Tsang Ooi4.
Abstract
While numerous studies have explored various sensing techniques for measuring attention states, moment-to-moment measurement of attention fluctuation remains unavailable. To bridge this gap, we applied a novel paradigm from psychology, the gradual-onset continuous performance task (gradCPT), to collect ground-truth attention states. GradCPT allows precise labeling of attention fluctuation on an 800 ms time scale. We then developed a new technique for measuring continuous attention fluctuation, based on a machine learning approach that uses the spectral properties of EEG signals as its main features. We demonstrated that, even with a consumer-grade EEG device, the detection accuracy of moment-to-moment attention fluctuations was 73.49%. Next, we empirically validated our technique in a video learning scenario and found that its predictions matched the classifications obtained through thought probes, with an average F1 score of 0.77. Our results suggest the effectiveness of gradCPT as a ground-truth labeling method and the feasibility of using consumer-grade EEG devices for continuous attention fluctuation detection.
Keywords: EEG; attention detection; machine learning; moment-to-moment; wearable
Year: 2021 PMID: 34069027 PMCID: PMC8156270 DOI: 10.3390/s21103419
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Summary of recent work on attention state classification from physiological sensing data.
| Sensors | Attention States | Attention State Labeling Method | Time Scale of Ground Truth | Classifier | Result |
|---|---|---|---|---|---|
| Thermal image | Sustained, alternating, selective, and divided attention | Controlled tasks | 3 min per task | Logistic Regression | 75.7% AUC (sustained); 87% AUC (alternating); 77.4% AUC (selective) |
| EDA | Engaged vs. not engaged | Self-report questionnaires | 45 min per questionnaire | SVM | 0.60 accuracy |
| PPG | Full attention (FA); low/high internal divided attention (LIDA/HIDA); external divided attention (EDA) | Designed tasks | 8 min per task | RBF-SVM classifiers | 50% for FA vs. EDA vs. LIDA vs. HIDA; 72.2% for FA vs. EDA; 75.0% for FA vs. LIDA; 83.3% for FA vs. HIDA |
Figure 1 (a) An illustration of three continuous trials of the gradCPT. (b) A division into “in the zone” and “out of the zone” states based on the participant’s reaction time variability (RTV) over 10 min.
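The gradCPT labeling scheme can be sketched in a few lines: each 800 ms trial yields one reaction time, the absolute z-scored reaction times form a variance time course, and a median split of the smoothed trace separates low-variability ("in the zone") from high-variability ("out of the zone") trials. The smoothing width below is an assumed parameter, not one taken from the paper.

```python
import numpy as np

def label_in_the_zone(rts, smooth=9):
    """Label each 800 ms gradCPT trial as 'in the zone' (True) or
    'out of the zone' (False) via a median split of the smoothed
    reaction-time variance time course.

    `smooth` (moving-average width in trials) is an assumption."""
    rts = np.asarray(rts, dtype=float)
    # Absolute z-scored RTs form the variance time course.
    vtc = np.abs((rts - rts.mean()) / (rts.std() + 1e-12))
    kernel = np.ones(smooth) / smooth
    vtc_smooth = np.convolve(vtc, kernel, mode="same")
    # Low variability = "in the zone".
    return vtc_smooth < np.median(vtc_smooth)

# Example: 100 simulated trial RTs around 800 ms.
rng = np.random.default_rng(0)
rts = 0.8 + 0.1 * rng.standard_normal(100)
labels = label_in_the_zone(rts)
```

By construction the median split assigns roughly half of the trials to each state, which gives balanced classes for training.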
Figure 2 The experiment setup: a participant wearing the EEG device, engaged in gradCPT.
Figure 3 Preprocessing procedure: (a) a 2 s raw EEG signal segment in which the participant blinked naturally; (b) normalized signal; (c) EOG detection on the normalized signal; (d) five frequency bands after bandpass filtering.
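The normalization and band-splitting steps of the preprocessing pipeline can be sketched as below. The band edges follow common EEG conventions and the 256 Hz sampling rate is an assumption; the paper's exact cut-offs and device rate may differ.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Conventional EEG band edges in Hz (assumed, not taken from the paper).
BANDS = {
    "delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
    "beta": (13, 30), "gamma": (30, 45),
}

def split_into_bands(segment, fs=256):
    """Z-score-normalize a raw EEG segment, then band-pass it into the
    five frequency bands with zero-phase Butterworth filters."""
    x = (segment - np.mean(segment)) / (np.std(segment) + 1e-12)
    out = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out[name] = sosfiltfilt(sos, x)  # zero-phase filtering
    return out

# Example: a 2 s segment dominated by a 10 Hz (alpha-band) oscillation.
fs = 256
t = np.arange(2 * fs) / fs
segment = np.sin(2 * np.pi * 10 * t)
bands = split_into_bands(segment, fs)
```

Second-order-sections filtering (`sosfiltfilt`) is used because cascaded biquads stay numerically stable at the very low normalized frequencies of the delta band.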
Figure 4 Model of classification.
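A hypothetical end-to-end sketch of the classification stage: per-window spectral features are fed to a binary classifier that predicts the in/out-of-the-zone label. The abstract does not name the classifier, so the RBF-SVM pipeline, the 5 bands × 5 features layout, and the random placeholder data here are all assumptions for illustration only.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder feature matrix: 200 windows x (5 bands * 5 features).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 25))
y = rng.integers(0, 2, 200)  # 1 = "in the zone", 0 = "out of the zone"

# Standardize features, then fit an RBF-SVM (classifier choice assumed).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:150], y[:150])
preds = clf.predict(X[150:])
```

With real per-participant features in place of the random matrix, accuracy and F1 would be evaluated with held-out windows, as Figures 6 and 8 report.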
Features computed for the theta, alpha, beta, gamma, and delta bands, with descriptions.
| Feature | Description |
|---|---|
| Approx. Entropy | Approximate entropy of the signal |
| Total variation | Sum of gradients in the signal |
| Standard variation | Standard deviation of the signal |
| Energy | Sum of squares of the signal |
| Skewness | Sample skewness of the signal |
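The five per-band features in the table above can be computed directly. Approximate entropy follows Pincus's definition; the embedding dimension `m=2` and tolerance of 0.2 × standard deviation are conventional defaults assumed here, not values from the paper.

```python
import numpy as np

def approx_entropy(x, m=2, r_frac=0.2):
    """Approximate entropy; tolerance r is a fraction of the signal's
    standard deviation (m and r_frac are assumed defaults)."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def phi(m):
        emb = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        # Chebyshev distance between every pair of m-length templates.
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return np.mean(np.log(np.mean(dist <= r, axis=1)))

    return phi(m) - phi(m + 1)

def band_features(x):
    """The five per-band features listed in the table."""
    x = np.asarray(x, dtype=float)
    std = x.std()
    return {
        "approx_entropy": approx_entropy(x),
        "total_variation": np.sum(np.abs(np.diff(x))),  # sum of gradients
        "standard_variation": std,                      # standard deviation
        "energy": np.sum(x ** 2),                       # sum of squares
        "skewness": np.mean((x - x.mean()) ** 3) / (std ** 3 + 1e-12),
    }

# Example: a regular oscillation vs. white noise.
rng = np.random.default_rng(0)
sine = np.sin(np.linspace(0, 8 * np.pi, 200))
noise = rng.standard_normal(200)
feat_sine = band_features(sine)
feat_noise = band_features(noise)
```

As a sanity check, a regular sine scores markedly lower approximate entropy than white noise, which is the irregularity contrast the feature is meant to capture.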
Figure 5 Feature extraction process.
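Feature extraction over a continuous recording amounts to slicing the signal into overlapping windows and computing the per-band features on each. An 0.8 s shift matches gradCPT's 800 ms trial scale; the 2 s window length and 256 Hz sampling rate are assumptions in this sketch.

```python
import numpy as np

def sliding_windows(signal, fs=256, win_s=2.0, shift_s=0.8):
    """Slice a continuous signal into overlapping windows.

    The 0.8 s shift mirrors the 800 ms gradCPT trial scale; the window
    length and sampling rate are assumed parameters."""
    win, shift = int(win_s * fs), int(shift_s * fs)
    starts = range(0, len(signal) - win + 1, shift)
    return [signal[s:s + win] for s in starts]

# Example: 10 s of signal at 256 Hz.
x = np.arange(10 * 256)
windows = sliding_windows(x)
```

Each window would then be band-split and reduced to its feature vector, yielding one prediction roughly every 0.8 s.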
Figure 6 Accuracy of 18 participants, P1–P18.
Figure 7 Mean correlation coefficients for different (a) frequency bands and sliding window shifts; (b) feature types and sliding window shifts; (c) feature types and frequency bands.
Figure 8 F1 score of 24 participants, P1–P24.
Figure 9 Raw EEG signal, predicted attention states, topics mentioned in the video, and the video timeline, all from a 400 s segment of P8’s video viewing. The blue spikes in the raw EEG signal are eye blinks, which are removed (following Section 3.2.2) before attention prediction.