Rui Liu, Xiang Xu, Hairu Yang, Zhenhua Li, Guan Huang.
Abstract
Immersive 360-degree video has become a new learning resource because of the immersive sensory experience it offers. This study examined the effects of textual and visual cues on learning and attention in immersive 360-degree video by using eye-tracking equipment integrated in a virtual reality head-mounted display. Participants (n = 110) were randomly assigned to one of four conditions: (1) no cues, (2) textual cues in the initial field of view (FOV), (3) textual cues outside the initial FOV, and (4) textual cues outside the initial FOV + visual cues. The results showed that the cues (annotations or annotations + arrows) helped learners achieve better learning outcomes and spend more time focusing on the areas with cues. In addition, the study found a serious imbalance in the distribution of learners' attention across the regions of the video: attention directed to textual cues in the initial FOV was much higher than attention directed to textual cues outside the initial FOV. Adding visual cues effectively directed attention to textual cues outside the initial FOV and alleviated this imbalance in attention distribution. Consequently, adding cues to immersive 360-degree video can be an appropriate approach to promote learning and guide attention in immersive 360-degree video learning environments. This study provides new insights into the design and development of immersive 360-degree video instructional resources.
Keywords: attention allocation; cues; eye-tracking technologies; immersive 360-degree video; learning outcome; signaling
Year: 2022 PMID: 35153916 PMCID: PMC8828640 DOI: 10.3389/fpsyg.2021.792069
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
FIGURE 1. The experimental conditions and procedure.
FIGURE 2. Snapshots of the four conditions.
Descriptive data for all variables under the four conditions.

| Dependent variables | NC group M | NC group SD | TCIIF group M | TCIIF group SD | TCOIF group M | TCOIF group SD | TCOIF + VC group M | TCOIF + VC group SD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Prior knowledge | 16.44 | 3.80 | 15.41 | 3.58 | 16.36 | 4.25 | 16.07 | 2.99 |
| Spatial ability | 9.63 | 2.29 | 10.70 | 2.16 | 10.04 | 2.30 | 9.61 | 1.93 |
| Learning outcome | 5.19 | 1.86 | 7.44 | 1.22 | 6.21 | 1.79 | 7.14 | 1.80 |
| Total fixation duration (s) | 126.78 | 28.40 | 139.33 | 17.25 | 130.45 | 20.64 | 123.19 | 19.79 |
| Fixation duration on annotation AOIs (s) | N/A | N/A | 14.03 | 5.60 | 6.98 | 3.86 | 13.64 | 6.06 |
| Fixation duration on initial FOV AOIs (s) | 123.99 | 29.07 | 138.08 | 18.61 | 127.99 | 21.65 | 116.05 | 20.53 |

Note: the maximum score on the prior knowledge test was 25; the maximum score on the spatial ability test was 15; and the maximum score on the learning outcome test was 11.
FIGURE 3. Fixation heatmaps of the four conditions.