| Literature DB >> 31932865 |
Michael Wiesing, Gereon R Fink, Ralph Weidner, Simone Vossel.
Abstract
The visual system forms predictions about upcoming visual features based on previous visual experience. Such predictions influence current perception, so that expected stimuli are detected faster and with higher accuracy. A key question is how these predictions are formed and at which levels of processing they arise. In particular, predictions could be formed at early levels of processing, where visual features are represented separately, or they might require higher levels of processing, with predictions based on full object representations that combine visual features. In four experiments, the present study investigated whether the visual system forms joint prediction errors or whether expectations about different visual features such as color and orientation are formed independently. The first experiment revealed that task-irrelevant and implicitly learned expectations were formed independently when the features were bound to different objects. In the second experiment, no evidence for a mutual influence of the two types of task-irrelevant, implicitly formed feature expectations was observed, even though both visual features were assigned to the same objects. A third experiment confirmed the findings of the previous experiments for explicitly rather than implicitly formed expectations. Finally, no evidence for a mutual influence of different feature expectations was observed when both features were assigned to a single, centrally presented object. Overall, the present results do not support the view that object feature binding generates joint feature-based expectancies of different object features. Rather, they suggest that expectations for color and orientation are processed and resolved independently at the feature level.
Keywords: Feature binding; Feature expectancies; Object binding; Prediction error; Probabilistic context
Year: 2020 PMID: 31932865 PMCID: PMC7007893 DOI: 10.1007/s00221-019-05710-z
Source DB: PubMed Journal: Exp Brain Res ISSN: 0014-4819 Impact factor: 1.972
Fig. 1 Stimulus examples of Experiment 1. Participants were asked to indicate by button press whether the two gratings had the same or different spatial frequency. The probabilities of occurrence of the colors of one grating and the orientations of the other grating were manipulated to induce feature expectations.
Fig. 2 Performance measures for each combination of the color and orientation manipulations in Experiment 1. a Error rates. b Reaction times. Error bars reflect 95% confidence intervals.
Fig. 3 Stimulus examples of Experiment 2. As in Experiment 1, participants were asked to respond to the spatial frequency of the two gratings, which could be the same or different. The probabilities of occurrence of both the color and the orientation of the same gratings were manipulated to induce feature expectations.
Fig. 4 Performance measures for each combination of the color and orientation manipulations in Experiment 2. a Error rates. b Reaction times. Error bars reflect 95% confidence intervals.
Fig. 5 Performance measures for each combination of the color and orientation manipulations in Experiment 3. a Error rates. b Reaction times. Error bars reflect 95% confidence intervals.
Fig. 6 Stimulus examples of Experiment 4. Participants were asked to respond to the spatial frequency of the grating, which could be high or low. The probabilities of occurrence of both the color and the orientation of the single grating were manipulated to induce feature expectations.
Fig. 7 Performance measures for each combination of the color and orientation manipulations in Experiment 4. a Error rates. b Reaction times. Error bars reflect 95% confidence intervals.