Sebo Uithol, Katherine L Bryant, Ivan Toni, Rogier B Mars.
Abstract
Humans have a remarkable capacity to arrange and rearrange perceptual input according to different categorizations. This raises the question of whether categorization is exclusively a higher visual or amodal process, or whether categorization processes influence early visual areas as well. To investigate this, we scanned healthy participants in a magnetic resonance imaging scanner during a conceptual decision task in which participants had to answer questions about upcoming images of animals. Early visual cortices (V1 and V2) contained information about the current visual input, about the granularity of the forthcoming categorical decision, and about perceptual expectations regarding the upcoming visual stimulus. The middle temporal gyrus, the anterior temporal lobe, and the inferior frontal gyrus were also involved in the categorization process, constituting an attention and control network that modulates perceptual processing. These findings provide further evidence that early visual processes are driven by conceptual expectations and task demands.
Keywords: fMRI; MVPA; conceptual knowledge; visual categorization
Year: 2021 PMID: 34491289 PMCID: PMC8567999 DOI: 10.1093/cercor/bhab163
Source DB: PubMed Journal: Cereb Cortex ISSN: 1047-3211 Impact factor: 5.357
Figure 1
Overview of a single trial.
Figure 2
Panel (A): Decoding accuracy maps of above-chance decoding of frogs versus dogs, in percentages (P < 0.001, family-wise error corrected at the cluster level), for basic-level decoding (red–yellow) and superordinate-level decoding (blue–green). Panel (B): ROI comparison in V1 of decoding accuracy between basic-level (left) and superordinate-level (right) decoding. Scale denotes percentage above chance level (50%); maximum decoding values can be above this range. Whiskers show the 95% confidence interval.
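The ROI decoding in Panel (B) can be illustrated with a minimal MVPA sketch: a linear classifier is trained to separate frog trials from dog trials on voxel patterns from an ROI such as V1, and scored with leave-one-run-out cross-validation. The data shapes, run structure, and linear-SVM estimator below are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of ROI-based MVPA decoding (cf. Figure 2B), assuming
# trial-wise voxel patterns have already been extracted; all shapes,
# the run structure, and the estimator are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200                   # hypothetical V1 ROI
X = rng.standard_normal((n_trials, n_voxels))  # trial x voxel patterns
y = rng.integers(0, 2, n_trials)               # 0 = frog, 1 = dog
runs = np.repeat(np.arange(8), 10)             # 8 hypothetical scanner runs

# Leave-one-run-out cross-validation keeps training and test trials
# from the same run apart, avoiding run-specific leakage.
clf = SVC(kernel="linear")
acc = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())

# The figures report accuracy as percentage points above chance (50%).
print(f"above-chance accuracy: {100 * acc.mean() - 50:.1f}%")
```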
Figure 3
Results of the MVPA anticipation analysis. Panel (A): Decoding accuracy map above chance (family-wise error corrected at the cluster level, P < 0.001) for cross-modal (questions and images) decoding of frogs versus dogs. Panel (B): ROI comparison of the 3 types of cross-validation: questions–questions, questions–gray screen, and questions–images. Boxes portray decoding accuracy; whiskers signify the 95% confidence interval. Scale denotes percentage above chance level (50%); maximum decoding values can be above this range.
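The questions–images condition in Panel (B) amounts to cross-modal decoding: a classifier fitted on patterns evoked by the questions is tested on patterns evoked by the images, so above-chance transfer indicates a category representation shared across the two epochs. A minimal sketch under the same illustrative assumptions as above (data shapes and estimator are not taken from the paper):

```python
# Sketch of cross-modal (questions -> images) decoding (cf. Figure 3B);
# shapes and estimator are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_q, n_img, n_voxels = 60, 60, 200
X_questions = rng.standard_normal((n_q, n_voxels))  # question-period patterns
y_questions = rng.integers(0, 2, n_q)               # 0 = frog, 1 = dog
X_images = rng.standard_normal((n_img, n_voxels))   # image-period patterns
y_images = rng.integers(0, 2, n_img)

# Train on one epoch type, test on the other; above-chance transfer
# implies the category code generalizes from questions to images.
clf = SVC(kernel="linear").fit(X_questions, y_questions)
acc = clf.score(X_images, y_images)
print(f"questions -> images: {100 * acc - 50:.1f}% above chance")
```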
Figure 4
Decoding accuracy map above chance (family-wise error corrected at the cluster level, P < 0.001) for decoding of question level (basic-level vs. superordinate-level questions). Scale denotes percentage above chance level (50%); maximum decoding values can be above this range.
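Whole-brain accuracy maps like those in Figures 2–4 are commonly produced with a searchlight analysis: a classifier is cross-validated within a small sphere centered on each voxel in turn, and the resulting accuracy is written back to that voxel. A minimal sketch on synthetic data using nilearn's SearchLight; the radius, estimator, cross-validation scheme, and data shapes are assumptions, and this record does not state the paper's exact settings.

```python
# Sketch of searchlight decoding on synthetic data; all parameters are
# illustrative assumptions, not the paper's settings.
import numpy as np
import nibabel as nib
from nilearn.decoding import SearchLight
from sklearn.model_selection import KFold

rng = np.random.default_rng(2)
n_trials = 40
vol = rng.standard_normal((10, 10, 10, n_trials))  # toy 4D "fMRI" data
imgs = nib.Nifti1Image(vol, affine=np.eye(4))
mask = nib.Nifti1Image(np.ones((10, 10, 10), dtype=np.int8), affine=np.eye(4))
y = rng.integers(0, 2, n_trials)  # e.g., basic vs. superordinate question

# A linear SVM is cross-validated in a sphere around every voxel; the
# fitted object exposes a 3D map of mean cross-validated accuracy.
sl = SearchLight(mask_img=mask, radius=3.0, estimator="svc",
                 cv=KFold(n_splits=4))
sl.fit(imgs, y)
acc_map = 100 * sl.scores_ - 50  # percentage points above chance (50%)
print(f"peak above-chance accuracy: {acc_map.max():.1f}%")
```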