
Correspondence between Monkey Visual Cortices and Layers of a Saliency Map Model Based on a Deep Convolutional Neural Network for Representations of Natural Images.

Nobuhiko Wagatsuma; Akinori Hidaka; Hiroshi Tamura

Abstract

Attentional selection is a function that allocates the brain's computational resources to the most important part of a visual scene at a specific moment. Saliency map models have been proposed as computational models to predict attentional selection within a spatial location. Recent saliency map models based on deep convolutional neural networks (DCNNs) exhibit the highest performance for predicting the location of attentional selection and human gaze, which reflect overt attention. Trained DCNNs potentially provide insight into the perceptual mechanisms of biological visual systems. However, the relationship between artificial and neural representations used for determining attentional selection and gaze location remains unknown. To understand the mechanism underlying saliency map models based on DCNNs and the neural system of attentional selection, we investigated the correspondence between layers of a DCNN saliency map model and monkey visual areas for natural image representations. We compared the characteristics of the responses in each layer of the model with those of the neural representation in the primary visual (V1), intermediate visual (V4), and inferior temporal (IT) cortices. Regardless of the DCNN layer level, the characteristics of the responses were consistent with those of the neural representation in V1. We found marked peaks of correspondence between V1 and the early-level and higher-intermediate-level layers of the model. These results provide insight into the mechanism of the trained DCNN saliency map model and suggest that the neural representations in V1 play an important role in computing the saliency that mediates attentional selection, which supports the V1 saliency hypothesis.
Copyright © 2021 Wagatsuma et al.
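The layer-to-area comparison described in the abstract can be illustrated with a small representational-similarity sketch: build a dissimilarity matrix over images for a model layer and for a neural population, then rank-correlate the two matrices. All data, dimensions, and function names below are illustrative assumptions; this is not the paper's actual recordings or analysis pipeline.

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between response patterns to each pair of images.
    responses: (n_images, n_units) array."""
    return 1.0 - np.corrcoef(responses)

def upper_tri(m):
    """Off-diagonal upper-triangle entries of a square matrix as a flat vector."""
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j]

def rdm_similarity(model_resp, neural_resp):
    """Spearman rank correlation between the upper triangles of the two RDMs."""
    a = upper_tri(rdm(model_resp))
    b = upper_tri(rdm(neural_resp))
    # Rank-transform, then compute a Pearson correlation of the ranks.
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Synthetic stand-ins: a hypothetical V1 population and a model layer
# whose responses are partly driven by the same image structure.
rng = np.random.default_rng(0)
n_images = 20
v1 = rng.normal(size=(n_images, 50))
layer = (v1 @ rng.normal(size=(50, 64))) * 0.7 + rng.normal(size=(n_images, 64))

sim = rdm_similarity(layer, v1)
print(sim)  # lies in [-1, 1]; higher values mean closer correspondence
```

Repeating this comparison for every layer of the model against each cortical area (V1, V4, IT) yields a layer-by-area correspondence profile of the kind the abstract summarizes.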


Keywords:  V1 saliency hypothesis; attention; computational model; deep learning; saliency map; visual system


Year:  2021        PMID: 33234544      PMCID: PMC7890521          DOI: 10.1523/ENEURO.0200-20.2020

Source DB:  PubMed          Journal:  eNeuro        ISSN: 2373-2822


References:  59 in total

1.  Visual segmentation by contextual influences via intra-cortical interactions in the primary visual cortex.

Authors:  Z Li
Journal:  Network       Date:  1999-05       Impact factor: 1.273

2.  Attention activates winner-take-all competition among visual filters.

Authors:  D K Lee; L Itti; C Koch; J Braun
Journal:  Nat Neurosci       Date:  1999-04       Impact factor: 24.884

3.  Tracking spike-amplitude changes to improve the quality of multineuronal data analysis.

Authors:  Hidekazu Kaneko; Hiroshi Tamura; Shinya S Suzuki
Journal:  IEEE Trans Biomed Eng       Date:  2007-02       Impact factor: 4.538

4.  [Review] Visual attention: the past 25 years.

Authors:  Marisa Carrasco
Journal:  Vision Res       Date:  2011-04-28       Impact factor: 1.886

5.  Transformation from image-based to perceptual representation of materials along the human ventral visual pathway.

Authors:  Chihiro Hiramatsu; Naokazu Goda; Hidehiko Komatsu
Journal:  Neuroimage       Date:  2011-05-04       Impact factor: 6.556

6.  Intrinsic and extrinsic effects on image memorability.

Authors:  Zoya Bylinskii; Phillip Isola; Constance Bainbridge; Antonio Torralba; Aude Oliva
Journal:  Vision Res       Date:  2015-03-20       Impact factor: 1.886

7.  The role of attention in figure-ground segregation in areas V1 and V4 of the visual cortex.

Authors:  Jasper Poort; Florian Raudies; Aurel Wannig; Victor A F Lamme; Heiko Neumann; Pieter R Roelfsema
Journal:  Neuron       Date:  2012-07-12       Impact factor: 17.173

8.  Extensive integration field beyond the classical receptive field of cat's striate cortical neurons--classification and tuning properties.

Authors:  C Y Li; W Li
Journal:  Vision Res       Date:  1994-09       Impact factor: 1.886

9.  A model of proto-object based saliency.

Authors:  Alexander F Russell; Stefan Mihalaş; Rudiger von der Heydt; Ernst Niebur; Ralph Etienne-Cummings
Journal:  Vision Res       Date:  2013-10-31       Impact factor: 1.886

10.  Attentional modulation of speed-change perception in the perifoveal and near-peripheral visual field.

Authors:  Taoxi Yang; Hans Strasburger; Ernst Pöppel; Yan Bao
Journal:  PLoS One       Date:  2018-08-30       Impact factor: 3.240

Cited by:  2 in total

1.  Predictive coding of natural images by V1 firing rates and rhythmic synchronization.

Authors:  Cem Uran; Alina Peter; Andreea Lazar; William Barnes; Johanna Klon-Lipok; Katharine A Shapcott; Rasmus Roese; Pascal Fries; Wolf Singer; Martin Vinck
Journal:  Neuron       Date:  2022-02-03       Impact factor: 18.688

2.  Analysis based on neural representation of natural object surfaces to elucidate the mechanisms of a trained AlexNet model.

Authors:  Nobuhiko Wagatsuma; Akinori Hidaka; Hiroshi Tamura
Journal:  Front Comput Neurosci       Date:  2022-09-30       Impact factor: 3.387

