| Literature DB >> 34007342 |
Khawlah Altuwairqi1, Salma Kammoun Jarraya1,2, Arwa Allinjawi1, Mohamed Hammami2,3.
Abstract
After the COVID-19 pandemic, no one refutes the importance of smart online learning systems in the educational process. Measuring student engagement is a crucial step towards smart online learning systems. A smart online learning system can automatically adapt to learners' emotions and provide feedback about their motivations. In the last few decades, online learning environments have generated tremendous interest among researchers in computer-based education. The challenge that researchers face is how to measure student engagement based on students' emotions. There has been increasing interest in computer vision and camera-based solutions as technologies that overcome the limits of both human observation and the expensive equipment traditionally used to measure student engagement. Several solutions have been proposed to measure student engagement, but few are behavior-based approaches. In response to these issues, in this paper, we propose a new automatic multimodal approach to measure student engagement levels in real time. To offer robust and accurate student engagement measures, we combine and analyze three modalities representing students' behaviors: emotions from facial expressions, keyboard keystrokes, and mouse movements. Such a solution operates in real time while providing the exact level of engagement and using the least expensive equipment possible. We validate the proposed multimodal approach through three main experiments, namely single, dual, and multimodal research modalities, on novel engagement datasets. In fact, we build new and realistic student engagement datasets to validate our contributions. We record the highest accuracy value (95.23%) for the multimodal approach and the lowest value of 0.04 for mean square error (MSE).
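The abstract does not specify how the three modality predictions are combined. As an illustrative sketch only, a common way to fuse per-modality classifiers is weighted late fusion of their class-probability outputs; the engagement-level names and the weights below are assumptions (the weights loosely echo the reported single-modality accuracies, face >> mouse > keyboard), not the paper's actual method.

```python
import numpy as np

# Hypothetical engagement levels; the paper's exact label set may differ.
ENGAGEMENT_LEVELS = ["disengaged", "nominally engaged", "engaged", "very engaged"]

def fuse_modalities(face_probs, mouse_probs, keyboard_probs,
                    weights=(0.6, 0.25, 0.15)):
    """Weighted average of per-modality class-probability vectors.

    Weights are illustrative assumptions reflecting per-modality reliability.
    """
    stacked = np.vstack([face_probs, mouse_probs, keyboard_probs])
    fused = np.average(stacked, axis=0, weights=weights)
    return fused / fused.sum()  # renormalize to a probability distribution

def predict_level(face_probs, mouse_probs, keyboard_probs):
    fused = fuse_modalities(face_probs, mouse_probs, keyboard_probs)
    return ENGAGEMENT_LEVELS[int(np.argmax(fused))]

# Example: the face model strongly suggests "engaged", the mouse model is
# uncertain, and the keyboard model leans "nominally engaged".
print(predict_level([0.05, 0.10, 0.70, 0.15],
                    [0.25, 0.25, 0.30, 0.20],
                    [0.10, 0.50, 0.30, 0.10]))  # -> engaged
```

Late fusion keeps each modality's classifier independent, so a noisy or missing channel (e.g. no keyboard activity during a reading task) can be down-weighted or dropped without retraining the others.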
Keywords: Academic facial emotions; Affective model; Convolutional neural network (CNN); Engagement level; Keyboard and mouse behaviors
Year: 2021 PMID: 34007342 PMCID: PMC8119613 DOI: 10.1007/s11760-021-01869-7
Source DB: PubMed Journal: Signal Image Video Process ISSN: 1863-1703 Impact factor: 2.157
Fig. 1 The proposed framework of the multimodal approach for recognizing student engagement levels
Evaluation of single, dual, and multimodal engagement level detection
| Modality name | Modality type | Accuracy (%) | MSE |
|---|---|---|---|
| Face emotion | Single | 76.19 | 0.23 |
| Mouse behavior | Single | 40.47 | 0.52 |
| Keyboard behavior | Single | 28.57 | 1.07 |
| Face emotion and mouse behavior | Dual | 90.47 | 0.095 |
| Face emotion and keyboard behavior | Dual | 80.95 | 0.14 |
| Mouse behavior and keyboard behavior | Dual | 42.85 | 0.85 |
| Face emotion, mouse behavior, and keyboard behavior | Multimodal | 95.23 | 0.04 |
Comparison between the engagement level of our proposed model and the state-of-the-art methods
| Work | Modality | Classifier | Engagement level | Tasks | Dataset description | Engagement level accuracy (%) |
|---|---|---|---|---|---|---|
| [ | Emotions from facial expressions (one) | SVM | Very engaged, engaged in the task, nominally engaged, not engaged at all | Set game | 34 volunteers; each volunteer sat in a private room; one session | 72.9 |
| [ | Emotions from facial expressions, eye gazes, and mouse behaviors (three) | SVM | High, medium, or low attention | Reading task | 6 volunteers; three sessions with quiet and noisy environments | 75.5 |