Literature DB >> 32467356

Disentangling the Independent Contributions of Visual and Conceptual Features to the Spatiotemporal Dynamics of Scene Categorization.

Michelle R Greene, Bruce C Hansen.

Abstract

Human scene categorization is characterized by its remarkable speed. While many visual and conceptual features have been linked to this ability, significant correlations exist between feature spaces, impeding our ability to determine their relative contributions to scene categorization. Here, we used a whitening transformation to decorrelate a variety of visual and conceptual features and assess the time course of their unique contributions to scene categorization. Participants (both sexes) viewed 2250 full-color scene images drawn from 30 different scene categories while having their brain activity measured through 256-channel EEG. We examined the variance explained at each electrode and time point of visual event-related potential (vERP) data from nine different whitened encoding models. These ranged from low-level features obtained from filter outputs to high-level conceptual features requiring human annotation. The amount of category information in the vERPs was assessed through multivariate decoding methods. Behavioral similarity measures were obtained in separate crowdsourced experiments. We found that all nine models together contributed 78% of the variance of human scene similarity assessments and were within the noise ceiling of the vERP data. Low-level models explained earlier vERP variability (88 ms after image onset), whereas high-level models explained later variance (169 ms). Critically, only high-level models shared vERP variability with behavior. Together, these results suggest that scene categorization is primarily a high-level process, but reliant on previously extracted low-level features.

Significance Statement

In a single fixation, we glean enough information to describe a general scene category. Many types of features are associated with scene categories, ranging from low-level properties, such as colors and contours, to high-level properties, such as objects and attributes. Because these properties are correlated, it is difficult to understand each property's unique contributions to scene categorization. This work uses a whitening transformation to remove the correlations between features and examines the extent to which each feature contributes to visual event-related potentials over time. We found that low-level visual features contributed first but were not correlated with categorization behavior. High-level features followed 80 ms later, providing key insights into how the brain makes sense of a complex visual world.
Copyright © 2020 the authors.
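The abstract's central methodological move is decorrelating the nine feature spaces with a whitening transformation before assessing each model's unique contribution. As a minimal sketch of what such a transformation does (this is an illustrative ZCA whitening in NumPy, not the authors' actual pipeline, whose details are not given in this record), whitening maps a correlated feature matrix to one whose feature covariance is the identity:

```python
import numpy as np

def zca_whiten(X, eps=1e-8):
    """ZCA-whiten rows of X (n_samples x n_features): after whitening,
    the feature covariance is (approximately) the identity matrix."""
    Xc = X - X.mean(axis=0)                       # center each feature
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)           # feature covariance
    vals, vecs = np.linalg.eigh(cov)              # eigendecomposition
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T  # ZCA matrix
    return Xc @ W

# Hypothetical correlated features standing in for overlapping feature spaces.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))
Xw = zca_whiten(X)
print(np.allclose(np.cov(Xw, rowvar=False), np.eye(4), atol=1e-2))  # True
```

Once features are decorrelated in this way, variance explained by each (whitened) encoding model can be attributed uniquely rather than shared across correlated feature spaces, which is the logic the abstract relies on.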

Keywords:  EEG; decoding; encoding; natural scenes

Year:  2020        PMID: 32467356      PMCID: PMC7329300          DOI: 10.1523/JNEUROSCI.2088-19.2020

Source DB:  PubMed          Journal:  J Neurosci        ISSN: 0270-6474            Impact factor:   6.167


  64 in total

1.  An electrophysiological study of scene effects on object identification.

Authors:  Giorgio Ganis; Marta Kutas
Journal:  Brain Res Cogn Brain Res       Date:  2003-04

2.  Temporal components in the parahippocampal place area revealed by human intracerebral recordings.

Authors:  Julien Bastin; Juan R Vidal; Seth Bouvier; Marcela Perrone-Bertolotti; Damien Bénis; Philippe Kahane; Olivier David; Jean-Philippe Lachaux; Russell A Epstein
Journal:  J Neurosci       Date:  2013-06-12       Impact factor: 6.167

3.  Visual information representation and rapid-scene categorization are simultaneous across cortex: An MEG study.

Authors:  Pavan Ramkumar; Bruce C Hansen; Sebastian Pannasch; Lester C Loschky
Journal:  Neuroimage       Date:  2016-03-18       Impact factor: 6.556

4.  Constructing scenes from objects in human occipitotemporal cortex.

Authors:  Sean P MacEvoy; Russell A Epstein
Journal:  Nat Neurosci       Date:  2011-09-04       Impact factor: 24.884

5.  Statistics of high-level scene context.

Authors:  Michelle R Greene
Journal:  Front Psychol       Date:  2013-10-29

6.  Metamers of the ventral stream.

Authors:  Jeremy Freeman; Eero P Simoncelli
Journal:  Nat Neurosci       Date:  2011-08-14       Impact factor: 24.884

7.  Deep neural networks rival the representation of primate IT cortex for core visual object recognition.

Authors:  Charles F Cadieu; Ha Hong; Daniel L K Yamins; Nicolas Pinto; Diego Ardila; Ethan A Solomon; Najib J Majaj; James J DiCarlo
Journal:  PLoS Comput Biol       Date:  2014-12-18       Impact factor: 4.475

8.  Deep supervised, but not unsupervised, models may explain IT cortical representation.

Authors:  Seyed-Mahdi Khaligh-Razavi; Nikolaus Kriegeskorte
Journal:  PLoS Comput Biol       Date:  2014-11-06       Impact factor: 4.475

9.  Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior.

Authors:  Iris Ia Groen; Michelle R Greene; Christopher Baldassano; Li Fei-Fei; Diane M Beck; Chris I Baker
Journal:  Elife       Date:  2018-03-07       Impact factor: 8.140

10.  Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks.

Authors:  Radoslaw Martin Cichy; Aditya Khosla; Dimitrios Pantazis; Aude Oliva
Journal:  Neuroimage       Date:  2016-04-01       Impact factor: 6.556

  4 in total

1.  Behavioral and neural representations en route to intuitive action understanding.

Authors:  Leyla Tarhan; Julian De Freitas; Talia Konkle
Journal:  Neuropsychologia       Date:  2021-10-12       Impact factor: 3.139

2.  Decoding Neural Representations of Affective Scenes in Retinotopic Visual Cortex.

Authors:  Ke Bo; Siyang Yin; Yuelu Liu; Zhenhong Hu; Sreenivasan Meyyappan; Sungkean Kim; Andreas Keil; Mingzhou Ding
Journal:  Cereb Cortex       Date:  2021-05-10       Impact factor: 5.357

3.  Low-Frequency Entrainment to Visual Motion Underlies Sign Language Comprehension.

Authors:  E A Malaia; S C Borneman; J Krebs; R B Wilbur
Journal:  IEEE Trans Neural Syst Rehabil Eng       Date:  2021-12-03       Impact factor: 3.802

4.  The N300: An Index for Predictive Coding of Complex Visual Objects and Scenes.

Authors:  Manoj Kumar; Kara D Federmeier; Diane M Beck
Journal:  Cereb Cortex Commun       Date:  2021-04-21
