
Evidence for similar patterns of neural activity elicited by picture- and word-based representations of natural scenes.

Manoj Kumar, Kara D Federmeier, Li Fei-Fei, Diane M Beck.

Abstract

A long-standing core question in cognitive science is whether different modalities and representation types (pictures, words, sounds, etc.) access a common store of semantic information. Although different input types have been shown to activate a shared network of brain regions, this does not necessitate a common representation, as the neurons in these regions could still differentially process the different modalities. However, multi-voxel pattern analysis can be used to assess whether, e.g., pictures and words evoke a similar pattern of activity, such that the patterns that separate categories in one modality transfer to the other. Prior work using this method has found support for a common code, but has two limitations: studies have either examined only disparate categories (e.g., animals vs. tools) that are known to activate different brain regions, raising the possibility that the pattern separation and inferred similarity reflect only large-scale differences between the categories, or they have been limited to individual object representations. By using natural scene categories, we not only extend the current literature on cross-modal representations beyond objects but also, because natural scene categories activate a common set of brain regions, identify a more fine-grained (i.e., higher spatial resolution) common representation. Specifically, we studied picture- and word-based representations of natural scene stimuli from four categories: beaches, cities, highways, and mountains. Participants passively viewed blocks of either phrases (e.g., "sandy beach") describing scenes or photographs from those same scene categories. To determine whether the phrases and pictures evoke a common code, we asked whether a classifier trained on one stimulus type (e.g., phrase stimuli) would transfer (i.e., cross-decode) to the other stimulus type (e.g., picture stimuli).
The analysis revealed cross-decoding in the occipitotemporal, posterior parietal and frontal cortices. This similarity of neural activity patterns across the two input types, for categories that co-activate local brain regions, provides strong evidence of a common semantic code for pictures and words in the brain.
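The cross-decoding logic described above can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' actual pipeline) using simulated voxel patterns and scikit-learn: a linear classifier is trained on "phrase" trials and evaluated on "picture" trials from the same four scene categories; above-chance accuracy on the held-out modality is the signature of a shared code.

```python
# Hypothetical cross-decoding sketch with simulated data; the shared
# per-category pattern plus modality-specific shift and trial noise are
# assumptions for illustration, not the study's real fMRI data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 50
categories = ["beach", "city", "highway", "mountain"]

# A common category pattern shared across modalities.
base = {c: rng.normal(size=n_voxels) for c in categories}

def simulate(modality_shift, n_trials=40):
    """Simulate trials: shared category pattern + modality offset + noise."""
    X, y = [], []
    for label, cat in enumerate(categories):
        for _ in range(n_trials):
            X.append(base[cat] + modality_shift
                     + rng.normal(scale=1.0, size=n_voxels))
            y.append(label)
    return np.array(X), np.array(y)

X_phrase, y_phrase = simulate(modality_shift=0.2)     # train modality
X_picture, y_picture = simulate(modality_shift=-0.2)  # test modality

# Train on phrases, test on pictures: chance level is 0.25 (4 classes).
clf = LogisticRegression(max_iter=1000).fit(X_phrase, y_phrase)
acc = clf.score(X_picture, y_picture)
print(f"cross-decoding accuracy: {acc:.2f}")
```

Because the simulated categories share a common pattern across modalities, the classifier transfers well above the 25% chance level; in the real analysis, the analogous transfer was tested within searchlights or regions of interest.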
Copyright © 2017 Elsevier Inc. All rights reserved.

Keywords:  MVPA; Natural scenes; Pictures; Semantics; Words; fMRI

Year:  2017        PMID: 28343000     DOI: 10.1016/j.neuroimage.2017.03.037

Source DB:  PubMed          Journal:  Neuroimage        ISSN: 1053-8119            Impact factor:   6.556


Related articles: 4 in total

1.  Discovering Event Structure in Continuous Narrative Perception and Memory.

Authors:  Christopher Baldassano; Janice Chen; Asieh Zadbood; Jonathan W Pillow; Uri Hasson; Kenneth A Norman
Journal:  Neuron       Date:  2017-08-02       Impact factor: 17.173

2.  Scene Perception in the Human Brain. (Review)

Authors:  Russell A Epstein; Chris I Baker
Journal:  Annu Rev Vis Sci       Date:  2019-06-21       Impact factor: 6.422

3.  A Double Dissociation in Sensitivity to Verb and Noun Semantics Across Cortical Networks.

Authors:  Giulia V Elli; Connor Lane; Marina Bedny
Journal:  Cereb Cortex       Date:  2019-12-17       Impact factor: 5.357

4.  Resolving the time course of visual and auditory object categorization.

Authors:  Polina Iamshchinina; Agnessa Karapetian; Daniel Kaiser; Radoslaw M Cichy
Journal:  J Neurophysiol       Date:  2022-05-18       Impact factor: 2.974

