
Are Face and Object Recognition Independent? A Neurocomputational Modeling Exploration.

Panqu Wang, Isabel Gauthier, Garrison Cottrell

Abstract

Are face and object recognition abilities independent? Although it is commonly believed that they are, Gauthier et al. [Gauthier, I., McGugin, R. W., Richler, J. J., Herzmann, G., Speegle, M., & VanGulick, A. E. Experience moderates overlap between object and face recognition, suggesting a common ability. Journal of Vision, 14, 7, 2014] recently showed that these abilities become more correlated as experience with nonface categories increases. They argued that there is a single underlying visual ability, v, that is expressed in performance with both face and nonface categories as experience grows. Using the Cambridge Face Memory Test and the Vanderbilt Expertise Test, they showed that the shared variance between Cambridge Face Memory Test and Vanderbilt Expertise Test performance increases monotonically as experience increases. Here, we ask why a shared resource across different visual domains does not lead to competition and an inverse correlation in abilities. We explain this conundrum using our neurocomputational model of face and object processing ["The Model", TM, Cottrell, G. W., & Hsiao, J. H. Neurocomputational models of face processing. In A. J. Calder, G. Rhodes, M. Johnson, & J. Haxby (Eds.), The Oxford handbook of face perception. Oxford, UK: Oxford University Press, 2011]. We model the domain-general ability v as the available computational resources (number of hidden units) in the mapping from input to label and experience as the frequency of individual exemplars in an object category appearing during network training. Our results show that, as in the behavioral data, the correlation between subordinate-level face and object recognition accuracy increases as experience grows. We suggest that different domains do not compete for resources because the relevant features are shared between faces and objects.
The essential power of experience is to generate a "spreading transform" for faces (separating them in representational space) that generalizes to objects that must be individuated. Interestingly, when the task of the network is basic level categorization, no increase in the correlation between domains is observed. Hence, our model predicts that it is the type of experience that matters and that the source of the correlation is in the fusiform face area, rather than in cortical areas that subserve basic level categorization. This result is consistent with our previous modeling elucidating why the FFA is recruited for novel domains of expertise [Tong, M. H., Joyce, C. A., & Cottrell, G. W. Why is the fusiform face area recruited for novel categories of expertise? A neurocomputational investigation. Brain Research, 1202, 14-24, 2008].
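The modeling setup described in the abstract — the shared ability v as hidden-unit count, and experience as exemplar frequency during training — can be sketched in toy form. The network below is an illustrative assumption, not the authors' actual model (TM): a minimal one-hidden-layer softmax network on synthetic clustered data, where `hidden_units` stands in for v and `experience` controls how often each exemplar is repeated in the training set.

```python
import numpy as np

# Illustrative sketch only (not the authors' model "TM"): a single
# hidden layer stands in for the shared resource v, and "experience"
# is modeled as the number of times each exemplar appears in training.
rng = np.random.default_rng(0)

def make_data(n_classes=4, n_exemplars=5, dim=20, experience=1):
    """Synthetic subordinate-level data: each class is a cluster of
    exemplars; `experience` repeats every exemplar in the training set."""
    centers = rng.normal(size=(n_classes, dim))
    X, y = [], []
    for c in range(n_classes):
        for _ in range(n_exemplars):
            x = centers[c] + 0.3 * rng.normal(size=dim)
            X += [x] * experience          # exemplar frequency = experience
            y += [c] * experience
    return np.array(X), np.array(y)

def train_mlp(X, y, hidden_units, epochs=500, lr=1.0):
    """Minimal tanh/softmax network trained by batch gradient descent;
    `hidden_units` plays the role of the shared resource v."""
    n, d = X.shape
    k = int(y.max()) + 1
    W1 = 0.1 * rng.normal(size=(d, hidden_units))
    W2 = 0.1 * rng.normal(size=(hidden_units, k))
    Y = np.eye(k)[y]
    for _ in range(epochs):
        H = np.tanh(X @ W1)                        # shared hidden features
        Z = H @ W2
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        G = (P - Y) / n                            # softmax cross-entropy gradient
        W2 -= lr * H.T @ G
        W1 -= lr * X.T @ ((G @ W2.T) * (1.0 - H**2))
    return W1, W2

def accuracy(W1, W2, X, y):
    return float(((np.tanh(X @ W1) @ W2).argmax(axis=1) == y).mean())

X, y = make_data(experience=3)            # higher experience -> more repetitions
W1, W2 = train_mlp(X, y, hidden_units=8)  # v modeled as 8 hidden units
print(accuracy(W1, W2, X, y))
```

In the paper's style of simulation, one would vary `hidden_units` across simulated subjects (individual differences in v) and `experience` across conditions, then correlate face and object accuracy at each experience level; all parameter values here are arbitrary placeholders.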


Year: 2016    PMID: 26741802    DOI: 10.1162/jocn_a_00919

Source DB: PubMed    Journal: J Cogn Neurosci    ISSN: 0898-929X    Impact factor: 3.225


  6 in total

1.  Not so fast! Response times in the computerized Benton Face Recognition Test may not reflect face recognition ability.

Authors:  Joseph DeGutis; Xian Li; Bar Yosef; Maruti V Mishra
Journal:  Cogn Neuropsychol       Date:  2022 May-Jun       Impact factor: 3.750

2.  Editorial: Face Perception across the Life-Span.

Authors:  Bozana Meinhardt-Injac; Andrea Hildebrandt
Journal:  Front Psychol       Date:  2016-08-31

3.  The potential of neuroscience for health sciences education: towards convergence of evidence and resisting seductive allure.

Authors:  Anique B H de Bruin
Journal:  Adv Health Sci Educ Theory Pract       Date:  2016-11-07       Impact factor: 3.853

4.  The Neural Dynamics of Facial Identity Processing: Insights from EEG-Based Pattern Analysis and Image Reconstruction.

Authors:  Dan Nemrodov; Matthias Niemeier; Ashutosh Patel; Adrian Nestor
Journal:  eNeuro       Date:  2018-02-26

5.  Caricatured facial movements enhance perception of emotional facial expressions.

Authors:  Nicholas Furl; Forida Begum; Francesca Pizzorni Ferrarese; Sarah Jans; Caroline Woolley; Justin Sulik
Journal:  Perception       Date:  2022-03-28       Impact factor: 1.695

6.  Infants exploit vowels to label objects and actions from continuous audiovisual stimuli.

Authors:  Cristina Jara; Cristóbal Moënne-Loccoz; Marcela Peña
Journal:  Sci Rep       Date:  2021-05-26       Impact factor: 4.379

