| Literature DB >> 25329087 |
Xuelian Zang, Lina Jia, Hermann J. Müller, Zhuanghua Shi.
Abstract
Our visual brain is remarkable at extracting invariant properties from the noisy environment, guiding the selection of where to look and what to identify. However, how the brain achieves this is still poorly understood. Here we explore interactions of local context and global structure in the long-term learning and retrieval of invariant display properties. Participants searched for a target among distractors, without knowing that some "old" configurations were presented repeatedly (randomly inserted among "new" configurations). We simulated tunnel vision, limiting the visible region around fixation. Robust facilitation of performance for old versus new contexts was observed when the visible region was large but not when it was small. However, once the display was made fully visible during the subsequent transfer phase, facilitation did become manifest. Furthermore, when participants were given a brief preview of the total display layout prior to tunnel-view search with 2 items visible, facilitation was already obtained during the learning phase. The eye movement results revealed contextual facilitation to be coupled with changes in saccadic planning, characterized by slightly extended gaze durations but a reduced number of fixations and shortened scan paths for old displays. Taken together, our findings show that invariant spatial display properties can be acquired based on scarce, para-/foveal information, while their effective retrieval for search guidance requires the availability (even if brief) of a certain extent of peripheral information. (c) 2015 APA, all rights reserved.
Year: 2014 PMID: 25329087 DOI: 10.1037/xlm0000060
Source DB: PubMed Journal: J Exp Psychol Learn Mem Cogn ISSN: 0278-7393 Impact factor: 3.051