Multimodal Data Fusion in Learning Analytics: A Systematic Review
Su Mu, Meng Cui, Xiaodi Huang.
Abstract
Multimodal learning analytics (MMLA), which has become increasingly popular, can provide an accurate understanding of learning processes. However, it is still unclear how multimodal data are integrated in MMLA. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, this paper systematically surveys 346 articles on MMLA published during the past three years. For this purpose, we first present a conceptual model for reviewing these articles from three dimensions: data types, learning indicators, and data fusion. Based on this model, we then answer the following questions: (1) What types of data and learning indicators are used in MMLA, and what are the relationships between them? (2) How are the data fusion methods in MMLA classified? Finally, we point out the key stages in data fusion and future research directions in MMLA. Our main findings from this review are: (a) the data in MMLA are classified into digital data, physical data, physiological data, psychometric data, and environment data; (b) the learning indicators are behavior, cognition, emotion, collaboration, and engagement; (c) the relationships between multimodal data and learning indicators are one-to-one, one-to-any, and many-to-one, and these complex relationships are the key to data fusion; (d) the main data fusion methods in MMLA are many-to-one, many-to-many, and multiple validations among multimodal data; and (e) multimodal data fusion is characterized by the multimodality of data, the multidimensionality of indicators, and the diversity of methods.
Keywords: data fusion; learning indicators; multimodal data; multimodal learning analytics; online learning
Year: 2020 PMID: 33266131 PMCID: PMC7729570 DOI: 10.3390/s20236856
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Flow diagram based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.
Inclusion and Exclusion Criteria for Reviewing Papers.
| Inclusion Criteria | Exclusion Criteria |
|---|---|
| The search keywords “Multimodal Learning Analytics” OR “MMLA” OR (“Learning analytics” AND “multimodal”) appear in the title, abstract, or keywords | Studies published before 2017 |
| | Duplicate papers (only one copy included) |
| | Articles unrelated to MMLA content |
| | Non-English papers |
| | Not peer-reviewed |
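The criteria above amount to a simple screening predicate over bibliographic records. Below is a minimal sketch, in Python, of how they could be applied; the record field names (`title`, `abstract`, `keywords`, `year`, `language`, `peer_reviewed`) are assumptions, and topical relevance and deduplication would still require manual review.

```python
import re

def matches_search(text: str) -> bool:
    """True if the text contains the stated search keywords."""
    t = text.lower()
    if "multimodal learning analytics" in t or re.search(r"\bmmla\b", t):
        return True
    # Alternative inclusion pattern: both terms present.
    return "learning analytics" in t and "multimodal" in t

def include_record(record: dict) -> bool:
    """Apply the inclusion criterion and the automatable exclusions
    (pre-2017, non-English, not peer-reviewed) to one record."""
    fields = " ".join([record.get("title", ""),
                       record.get("abstract", ""),
                       " ".join(record.get("keywords", []))])
    return (matches_search(fields)
            and record.get("year", 0) >= 2017
            and record.get("language") == "en"
            and record.get("peer_reviewed", False))
```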
Scoring rules.
| Stage | Scoring Rule | Score | RQ |
|---|---|---|---|
| Title and abstract | The topic has nothing to do with MMLA | 0 | |
| | The topic has little to do with MMLA | 1~2 | |
| | The topic is MMLA | 3~6 | Q1 & Q2 & Q3 |
| Full text | 3.1 Only mentions MMLA | 3 | Q1 |
| | 3.2 Non-empirical study on MMLA, such as a review or theory | 4 | Q1 |
| | 3.3 Empirical study on MMLA | 5~6 | Q2 |
| | Without data fusion | 5 | |
| | With data fusion | 6 | Q3 |
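The rubric reads as a small decision procedure, sketched below; the argument names and categorical labels are assumptions rather than anything specified in the paper.

```python
# A minimal encoding of the scoring rubric above.
def score_paper(topic: str, depth: str = "mention",
                data_fusion: bool = False) -> int:
    """Return the rubric score (0-6) for one reviewed paper.

    topic: 'unrelated' (0) | 'marginal' (1-2) | 'mmla' (3-6), judged
           from the title and abstract.
    depth: for MMLA papers, 'mention' (3) | 'non-empirical' (4) |
           'empirical' (5 without data fusion, 6 with it).
    """
    if topic == "unrelated":
        return 0
    if topic == "marginal":
        return 1  # reviewers assign 1 or 2 at their discretion
    if depth == "mention":
        return 3
    if depth == "non-empirical":
        return 4
    return 6 if data_fusion else 5

# Scores feed the research questions as in the table above.
RQS_BY_SCORE = {3: ("Q1",), 4: ("Q1",), 5: ("Q2",), 6: ("Q2", "Q3")}
```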
Multimodal data classification and case studies.
| Type | Source | Multimodal Data and Code | Case Studies | Author |
|---|---|---|---|---|
| Digital data | Clickstream | Log data | Log data as a proxy measure of student engagement | [ |
| | | | Interactions in STEAM via a physical computing platform | [ |
| | | Mouse | Behavioral engagement detection of students | [ |
| | | Keystrokes | A surrogate measure for the effort put in by the student | [ |
| | Qualitative data | Text | Learners’ emotions from pedagogical texts | [ |
| | | Handwriting | Dynamic handwriting signal to predict domain expertise | [ |
| | | | A sensitive measure of handwriting performance | [ |
| | | Digital footnote | Analyzing students’ reviewing behavior | [ |
| Physical data | Eye | Eye movement | Student/teacher co-attention (i.e., with-me-ness) | [ |
| | | | Improving communication between pair programmers | [ |
| | | Eye contact | Joint visual attention | [ |
| | | | Eye contact in three-party conversations | [ |
| | Mouth | Audio | Exploring collaborative writing of user stories | [ |
| | | | Think-aloud protocols used in cognitive and metacognitive activities | [ |
| | Face | Facial expression | Investigating emotional variation during interaction | [ |
| | | | Automated detection of engagement | [ |
| | | Facial region | Behaviors of lecturers and students | [ |
| | | | Student behavior monitoring systems | [ |
| | | Facial temperature | Assessing the effect of different levels of cognitive load on facial temperature | [ |
| | Head | Head region | Behavioral engagement detection of students | [ |
| | | | Modeling collaborative problem-solving competence | [ |
| | Hand | Hand | A pressure-sensitive data glove designed to provide feedback for palpation tasks | [ |
| | | | Using hand motion to understand embodied mathematical learning | [ |
| | Arms | Arms | Dynamic adaptive gesturing predicts domain expertise in mathematics | [ |
| | | | Embodied learning behavior in the mathematics curriculum | [ |
| | Leg | Step count | Using step counts to predict learning performance in ubiquitous learning | [ |
| | Body | Body posture | Enhancing multimodal learning through personalized gesture recognition | [ |
| | | | Embodied strategies in the teaching and learning of science | [ |
| | | Body movement and location | Making spatial pedagogy visible using positioning sensors | [ |
| | | | Tracing students’ physical movement during practice-based learning | [ |
| | | Orientation | Aggregating positioning and orientation in the visualization of classroom proxemics | [ |
| Physiological data | Brain | Electroencephalogram (EEG) | Detecting cognitive load using EEG during learning | [ |
| | | | Multimodal emotion recognition | [ |
| | Skin | Electrodermal activity (EDA) | Profiling sympathetic arousal in a physics course | [ |
| | | Galvanic skin response (GSR) | The difficulty of learning materials | [ |
| | | Skin temperature | Recognition of emotions | [ |
| | Heart | Electrocardiogram (ECG) | EDA and ECG study of pair programming in a classroom environment | [ |
| | | | Multimodal emotion recognition | [ |
| | | Photoplethysmography (PPG) | Recognition of emotions | [ |
| | | Heart rate/heart rate variability | Automated detection of engagement | [ |
| | Blood | Blood volume pulse (BVP) | Recognition of emotions | [ |
| | Lung | Breathing/respiration | Recognition of emotions | [ |
| Psychometric data | | Motivation | Motivation coming from a questionnaire | [ |
| Environment data | | Weather condition | Predicting performance in self-regulated learning using multimodal data such as (1) temperature, (2) pressure, (3) precipitation, and (4) weather type | [ |
Scoring results.
| Score | Num. of Articles | Percentage | Remarks |
|---|---|---|---|
| 3 | 47 | 13.58% | Only mentions MMLA |
| 4 | 110 | 31.79% | Non-empirical study on MMLA |
| 5 | 77 | 22.25% | Empirical study on MMLA, without data fusion |
| 6 | 112 | 32.37% | Empirical study on MMLA, with data fusion |
Figure 2. A conceptual model of multimodal data analysis.
Figure 3. The classification framework of the multimodal data.
The relationships between multimodal data and learning indicators.
| Multimodal Data \ Learning Indicator | Behavior | Attention | Cognition/Metacognition | Emotion | Collaboration | Engagement | Learning Performance |
|---|---|---|---|---|---|---|---|
| Digital space | [ | [ | [ | [ | [ | [ | |
| Physical space | [ | [ | [ | [ | [ | [ | [ |
| Physiological space | [ | [ | [ | [ | [ | | |
| Psychometric space | [ | [ | [ | [ | [ | | |
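As a reading aid, the three relationship patterns named in the abstract (one-to-one, one-to-any, many-to-one) can be written as simple mappings. The pairings below are illustrative examples drawn from the case studies above, not the paper’s complete mapping.

```python
# One data type informs one indicator (e.g., step count -> performance).
one_to_one = {"step count": "learning performance"}

# One data type informs several indicators (e.g., eye movement).
one_to_any = {"eye movement": ["attention", "cognition", "collaboration"]}

# Several data types jointly inform one indicator (e.g., engagement).
many_to_one = {"engagement": ["facial expression", "heart rate", "log data"]}
```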
Data integration in multimodal learning analytics (MMLA).
| Integration Methods | Data Type | Learning Indicators | Author |
|---|---|---|---|
| Many-to-One | FE, PPG | Emotion | [ |
| AU, FA, LOG, HA | Learning performance | [ | |
| LOG, AU, BL, SR | Collaboration | [ | |
| PS, AU | Emotion | [ | |
| AU, FE, BL, EDA, VO | Collaboration, engagement, learning performance | [ | |
| EM, AU, VB, MP | Teaching behavior | [ | |
| FE, HER, EM | Engagement | [ | |
| FR, HER, BL | Engagement | [ | |
| AR, HER, FR | Collaboration | [ | |
| FE, EM, EEG, EDA, BVP, HR, TEMP | Engagement | [ | |
| AU, LOG | Collaboration | [ | |
| AU, LOG | Emotion | [ | |
| AU, VB | Engagement | [ | |
| FR, MO, LOG | Engagement | [ | |
| FE, HR, LBP-TOP | Engagement | [ | |
| AU, LOG, BL | Oral presentations | [ | |
| PS, AU, FE | Emotion | [ | |
| EM, EEG | Emotion | [ | |
| AU, FE, ECG, EDA | Emotion | [ | |
| VB, LOG | Cognition | [ | |
| FE, HER, LOG | Engagement | [ | |
| SC, LOG, HR, EN | Learning performance | [ | |
| HER, LOG | Engagement | [ | |
| PE, PS, AU, FE, BL, EM, EEG, BVP, GSA | Learning performance | [ | |
| GSR, ST, HR, HRV, PD | Cognitive load | [ | |
| AU, EM, LOG | Dialogue failure in human-computer interaction | [ | |
| AU, HAR, FR | Collaboration | [ | |
| HAR, EC, FR | Engagement | [ | |
| BL, MP, LOG | Attention | [ | |
| AU, FE, EM, LOG | Collaboration | [ | |
| EEG, EOG, ST, GSR, BVP | Emotion | [ | |
| AU, EC, AR, MP | Oral presentations | [ | |
| AU, BL, LOG | Embodied learning behavior | [ | |
| Many-to-Many | FE, BL, AU, EC | Oral presentations | [ |
| BL, AU | Collaboration | [ | |
| MP, AU, LOG, EDA, PS | Medical operation skills | [ | |
| BL, EMG, LOG | Medical operation skills | [ | |
| AU, EM, MP, BL | Embodied learning behavior | [ | |
| FA, EC, MP | Face-to-face classroom | [ | |
| AU, HER, HA, AR, MP | Oral presentations | [ | |
| FE, HER, AR, LE, MP | Dancing skills | [ | |
| FA, EDA, HR | - | [ | |
| AU, MP, BL, LOG | Oral presentations | [ | |
| EM, EEG | Attention, cognition | [ | |
| - | Dancing skills | [ | |
| AU, BL, MP, LOG | - | [ | |
| EM, EEG | Adaptive self-assessment activity | [ | |
| AU, VB, LOG | - | [ | |
| EM, LOG | Open-ended learning environments | [ | |
| BL, EC, AU, LOG | Oral presentations | [ | |
| MP, FE, AU | Oral presentations | [ | |
| Mutual verification among multimodal data | VO, FE, EDA | Collaboration, emotion | [ |
| BL, EDA, EM, AU, BVP, IBI, HR | Collaboration | [ |
| LOG, SR, AU | Online learning | [ | |
| FR, EC | Embodied learning behavior | [ | |
| PS, EM, LOG | Calligraphy training | [ | |
| PS, GSR, ST, LOG | Online learning problem solving | [ | |
| BL, MP, AU | Collaboration | [ | |
| HER, AR | Language learning | [ | |
| EDA, ECG | Collaboration | [ | |
| EEG, LOG | Cognition | [ | |
| - | Collaboration | [ | |
| MP, OR | Teaching behavior | [ | |
| VB, ONLINE | Emotion | [ | |
| EM, BL | Collaboration | [ | |
| EDA, PS | - | [ | |
| EC, MP | Collaboration | [ | |
| FE, EM, GSR | Learning performance | [ | |
| EM, FA, LOG | Learning difficulties | [ | |
| EM, LOG | Cognition | [ | |
| EM, AU, LOG | Engagement, collaboration, learning performance | [ |
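For concreteness, below is a minimal sketch of many-to-one fusion at the feature level: features from several modalities are concatenated into one vector per learner and used to predict a single learning indicator. The synthetic arrays, feature dimensions, and logistic-regression choice are illustrative assumptions, not the method of any surveyed study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120  # learners; the arrays below are stand-ins for real extracted features

facial = rng.normal(size=(n, 17))      # e.g., facial action-unit intensities
eda = rng.normal(size=(n, 4))          # e.g., electrodermal activity statistics
logs = rng.normal(size=(n, 8))         # e.g., clickstream counts
engaged = rng.integers(0, 2, size=n)   # binary engagement label

# Many-to-one fusion: concatenate the modality features and predict
# one learning indicator (engagement) from the fused representation.
X = np.hstack([facial, eda, logs])
clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, engaged, cv=5).mean())
```

Under the same framing, many-to-many fusion would predict several indicators jointly from the fused features, while mutual verification would estimate the same indicator independently from each modality and compare the results.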
Figure 4. Data integration methods.
Figure 5. Three-dimensional features of data integration in MMLA.