Laura Seminati, Jacob Hadnett-Hunter, Richard Joiner, Karin Petrini.
Abstract
Individuals increasingly rely on GPS devices to orient themselves and find their way in their environment, and research has pointed to a negative impact of navigational systems on spatial memory. We used immersive virtual reality (IVR) to examine whether an audio-visual navigational aid can counteract the negative impact of visual-only or auditory-only GPS systems. We also examined the effect of spatial representation preferences and abilities when using different GPS systems. Thirty-four participants completed an IVR driving game including four GPS conditions (no GPS; audio GPS; visual GPS; audio-visual GPS). After driving one of the routes in one of the four GPS conditions, participants were asked to drive to a target landmark they had previously encountered. The audio-visual GPS condition returned more accurate performance than the visual and no GPS conditions. General orientation ability predicted the distance to the target landmark for the visual and the audio-visual GPS conditions, while landmark preference predicted performance in the audio GPS condition. Finally, the variability in end distance to the target landmark was significantly reduced in the audio-visual GPS condition compared with the visual and audio GPS conditions. These findings support theories of spatial cognition and inform the optimisation of GPS designs.
Year: 2022 PMID: 35513403 PMCID: PMC9072375 DOI: 10.1038/s41598-022-11124-9
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.996
Figure 1. VR environment. (a) Top-down view of the IVR city environment with an example route (the route matches the trajectory in Fig. 2a). (b) The experimental setting. (c–e) Examples of target landmarks on the routes: “Police Station”, “Chinese Restaurant”, and “Hospital”, respectively. (f) Task representation and examples of a participant’s view during the encoding and test phases. In the example, the participant was tested in the No GPS condition and was informed that the target they had to drive to at a later time was the police station (the route the participant drove through in this condition passed by the police station but did not stop there). During the test phase, the participant was instructed on screen to find the police station by driving in a blank city environment. Fig. 1 was created using CorelDRAW 2020 (64-Bit) (https://www.coreldraw.com/en/pages/coreldraw-2020/).
Figure 2. Examples of a participant’s performance in the test phase for the five routes (a–e) in each GPS condition. The dotted line shows the entire route used in the encoding phase; the red point marks the target landmark of the route. (a) matches the route shown in Fig. 1a.
Figure 3. Results for performance and general orientation ability in the different conditions. The top left panel shows a boxplot of end distance error for the different GPS conditions. The top centre panel shows a boxplot of the time participants took to reach the target landmark in the different GPS conditions. The top right panel shows a boxplot of the standard deviation of the end distance error in the different GPS conditions. The bottom left panel shows the relation between individual “general orientation ability” and “end distance” error in the visual GPS condition, the bottom middle panel shows the same relation in the audio-visual GPS condition, and the bottom right panel shows the relation between individual “general orientation ability” and the “variability” (standard deviation) in end distance error in the audio-visual GPS condition. The red line in each boxplot represents the median and the box the interquartile range (IQR). Note: Friedman’s and Wilcoxon’s tests compare mean ranks between conditions, but we report medians and IQRs as descriptive measures because they are more commonly reported for non-parametric tests. See the supplementary material for measures of mean rank.