Paolo Papale, Monica Betta, Giacomo Handjaras, Giulia Malfatti, Luca Cecchetti, Alessandra Rampinini, Pietro Pietrini, Emiliano Ricciardi, Luca Turella, Andrea Leo.
Abstract
Biological vision relies on representations of the physical world at different levels of complexity. Relevant features range from simple low-level properties, such as contrast and spatial frequency, to object-based attributes, such as shape and category. However, how these features are integrated into coherent percepts is still debated. Moreover, these dimensions often share common biases: for instance, stimuli from the same category (e.g., tools) may have similar shapes. Here, using magnetoencephalography, we revealed the temporal dynamics of feature processing in human subjects attending to objects from six semantic categories. By employing Relative Weights Analysis, we mitigated collinearity between model-based descriptions of the stimuli and showed that low-level properties (contrast and spatial frequencies), shape (medial axis) and category are represented within the same spatial locations early in time: 100-150 ms after stimulus onset. This fast, overlapping processing may result from independent parallel computations, with categorical representation emerging later than the onset of low-level feature processing, yet before shape coding. Categorical information is represented both before and after shape, suggesting a role for this feature in the refinement of categorical matching.
Year: 2019 PMID: 31110195 PMCID: PMC6527710 DOI: 10.1038/s41598-019-43956-3
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Different representations of a natural image. A real-world scene (A), depicting two giraffes in the savannah, can be defined by its edges (B), by the shape of the giraffes (C) and also by the categorical information it conveys (D). Photo taken from http://pixabay.com, released under a Creative Commons CC0 license.
Figure 2. Methodological pipeline. (A) Experimental design: subjects were asked to attend to thirty object pictures during a semantic judgment task. (B) Representational dissimilarity matrices (RDMs) of three models (low-level features, shape and category) were employed to predict the MEG representational geometry; Spearman correlations between the models are reported in the central triangle. (C) With Relative Weights Analysis, MEG RDMs were predicted using three orthogonal principal components (PCs 1-3) obtained from the models, and the resulting regression weights were back-transformed to determine the relative impact of each model on the overall prediction while controlling for model collinearity (see Methods). Photo taken and edited from http://pixabay.com, released under a Creative Commons CC0 license.
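The paper does not provide code, but the core idea of Relative Weights Analysis (Johnson, 2000) can be sketched: predict an outcome (here, a vectorized MEG RDM) from collinear predictors (vectorized model RDMs) via an orthogonal transformation, then back-transform the regression weights so that each predictor receives a non-negative share of the explained variance. A minimal NumPy sketch, assuming predictors and outcome are plain column vectors (all names below are illustrative, not from the paper):

```python
import numpy as np

def relative_weights(X, y):
    """Sketch of relative weights analysis (Johnson, 2000).

    X : (n_obs, p) predictor matrix, e.g. vectorized model RDMs
    y : (n_obs,)   outcome vector, e.g. a vectorized MEG RDM
    Returns p non-negative weights that sum to the full-model R².
    """
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    # Correlations among predictors, and of each predictor with y
    R = np.corrcoef(X, rowvar=False)
    rxy = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    # Matrix square root of R links the orthogonal components
    # back to the original (collinear) predictors
    w, V = np.linalg.eigh(R)
    Lam = V @ np.diag(np.sqrt(w)) @ V.T          # R^(1/2)
    # Standardized weights of y on the orthogonal components
    beta = np.linalg.solve(Lam, rxy)
    # Partition R² across predictors via squared loadings
    return (Lam ** 2) @ (beta ** 2)
```

Because each weight is a sum of squared terms, the weights are non-negative by construction, and they sum exactly to the R² of the full regression, which is what makes them interpretable as each model's relative contribution despite collinearity.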
Figure 3. Results. Topographic plots of the group-level z-maps. The top row reports the time bins. Black dots mark channels that are significant throughout the entire time bin (p < 0.05, rank test, 100,000 permutations, TFCE corrected).
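The group-level inference in Figure 3 rests on permutation testing with a multiple-comparisons correction across channels. The paper uses TFCE; the sketch below swaps that for the simpler max-statistic correction, which illustrates the same logic (build a null distribution by sign-flipping subjects, take the maximum across channels per permutation) without the TFCE enhancement step. All names and shapes are illustrative assumptions, not from the paper:

```python
import numpy as np

def signflip_maxstat(z, n_perm=1000, alpha=0.05, seed=0):
    """One-sample sign-flip permutation test with max-statistic
    correction across channels (stand-in for TFCE correction).

    z : (n_subjects, n_channels) per-subject statistics
    Returns a boolean mask of channels significant at corrected alpha.
    """
    rng = np.random.default_rng(seed)
    n_sub, n_chan = z.shape
    observed = z.mean(axis=0)
    # Null distribution of the maximum channel statistic
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))
        null_max[i] = (signs * z).mean(axis=0).max()
    # Corrected p per channel: fraction of permuted maxima >= observed
    p = (1 + (null_max[None, :] >= observed[:, None]).sum(axis=1)) / (1 + n_perm)
    return p < alpha
```

Comparing each channel against the permutation distribution of the maximum controls the family-wise error rate over all channels, which is why a single corrected threshold can be applied to the whole topographic map.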