
Contextual cueing of tactile search is coded in an anatomical reference frame.

Leonardo Assumpção, Zhuanghua Shi, Xuelian Zang, Hermann J Müller, Thomas Geyer.

Abstract

This work investigates the reference frame(s) underlying tactile context memory, a form of statistical learning in a tactile (finger) search task. In this task, if a searched-for target object is repeatedly encountered within a stable spatial arrangement of task-irrelevant distractors, detecting the target becomes more efficient over time (relative to nonrepeated arrangements), as learned target-distractor spatial associations come to guide tactile search, thus cueing attention to the target location. Since tactile search displays can be represented in several reference frames, including multiple external frames and an anatomical frame, in Experiment 1 we asked whether repeated search displays are represented in tactile memory with reference to an environment-centered or an anatomical reference frame. In Experiment 2, we went on to examine a hand-centered versus an anatomical reference frame of tactile context memory. Observers performed a tactile search task divided into a learning and a test session. At the transition between the two sessions, we introduced postural manipulations of the hands (crossed ↔ uncrossed in Expt. 1; palm-up ↔ palm-down in Expt. 2) to determine the reference frame of tactile contextual cueing. In both experiments, target-distractor associations acquired during learning transferred to the test session when the placement of the target and distractors was held constant in anatomical, but not external, coordinates. In the latter case, reaction times (RTs) for repeated displays were even slower than for nonrepeated displays. We conclude that tactile contextual learning is coded in an anatomical reference frame. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

Year:  2017        PMID: 29035073     DOI: 10.1037/xhp0000478

Source DB:  PubMed          Journal:  J Exp Psychol Hum Percept Perform        ISSN: 0096-1523            Impact factor:   3.332


  4 in total

1.  Crossmodal learning of target-context associations: When would tactile context predict visual search?

Authors:  Siyi Chen; Zhuanghua Shi; Xuelian Zang; Xiuna Zhu; Leonardo Assumpção; Hermann J Müller; Thomas Geyer
Journal:  Atten Percept Psychophys       Date:  2020-05       Impact factor: 2.199

2.  Influences of luminance contrast and ambient lighting on visual context learning and retrieval.

Authors:  Xuelian Zang; Lingyun Huang; Xiuna Zhu; Hermann J Müller; Zhuanghua Shi
Journal:  Atten Percept Psychophys       Date:  2020-11       Impact factor: 2.199

3.  Multisensory visuo-tactile context learning enhances the guidance of unisensory visual search.

Authors:  Siyi Chen; Zhuanghua Shi; Hermann J Müller; Thomas Geyer
Journal:  Sci Rep       Date:  2021-05-03       Impact factor: 4.379

4.  Searching on the Back: Attentional Selectivity in the Periphery of the Tactile Field.

Authors:  Elena Gherri; Felicity White; Elisabetta Ambron
Journal:  Front Psychol       Date:  2022-07-13
