
Controversial stimuli: Pitting neural networks against each other as models of human cognition.

Tal Golan, Prashant C Raju, Nikolaus Kriegeskorte.

Abstract

Distinct scientific theories can make similar predictions. To adjudicate between theories, we must design experiments for which the theories make distinct predictions. Here we consider the problem of comparing deep neural networks as models of human visual recognition. To efficiently compare models' ability to predict human responses, we synthesize controversial stimuli: images for which different models produce distinct responses. We applied this approach to two visual recognition tasks, handwritten digits (MNIST) and objects in small natural images (CIFAR-10). For each task, we synthesized controversial stimuli to maximize the disagreement among models which employed different architectures and recognition algorithms. Human subjects viewed hundreds of these stimuli, as well as natural examples, and judged the probability of presence of each digit/object category in each image. We quantified how accurately each model predicted the human judgments. The best-performing models were a generative analysis-by-synthesis model (based on variational autoencoders) for MNIST and a hybrid discriminative-generative joint energy model for CIFAR-10. These deep neural networks (DNNs), which model the distribution of images, performed better than purely discriminative DNNs, which learn only to map images to labels. None of the candidate models fully explained the human responses. Controversial stimuli generalize the concept of adversarial examples, obviating the need to assume a ground-truth model. Unlike natural images, controversial stimuli are not constrained to the stimulus distribution models are trained on, thus providing severe out-of-distribution tests that reveal the models' inductive biases. Controversial stimuli therefore provide powerful probes of discrepancies between models and human perception.
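The synthesis described in the abstract amounts to gradient-based optimization of an image so that two candidate models disagree about its category. The following is a minimal PyTorch sketch of that idea, assuming two pretrained classifiers (model_a, model_b) that map image batches to class logits; the hard-minimum objective, optimizer, starting point, and hyperparameters here are illustrative assumptions, not the authors' published procedure.

    # Illustrative sketch of controversial-stimulus synthesis (assumptions:
    # model_a / model_b are pretrained classifiers returning class logits).
    import torch

    def synthesize_controversial(model_a, model_b, class_i, class_j,
                                 shape=(1, 1, 28, 28), steps=500, lr=0.05):
        """Optimize an image so model_a assigns high probability to
        class_i while model_b assigns high probability to class_j
        (class_i != class_j), i.e., the two models disagree."""
        x = torch.rand(shape, requires_grad=True)  # start from noise
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            p_a = torch.softmax(model_a(x), dim=1)[0, class_i]
            p_b = torch.softmax(model_b(x), dim=1)[0, class_j]
            # Controversiality score: both target probabilities must be
            # high; the minimum lets the weaker term drive the update.
            loss = -torch.min(p_a, p_b)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                x.clamp_(0.0, 1.0)  # keep pixels in the valid range
        return x.detach()

In the study, stimuli of this kind were synthesized for pairs of candidate models and category pairs, then shown to human subjects so that each model's predictions could be scored against the human judgments.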

Keywords:  adversarial examples; deep neural networks; generative modeling; optimal experimental design; visual object recognition

Year:  2020        PMID: 33229549      PMCID: PMC7703564          DOI: 10.1073/pnas.1912334117

Source DB:  PubMed          Journal:  Proc Natl Acad Sci U S A        ISSN: 0027-8424            Impact factor:   11.205


References:  11 in total

1.  Using goal-driven deep learning models to understand sensory cortex. (Review)

Authors:  Daniel L K Yamins; James J DiCarlo
Journal:  Nat Neurosci       Date:  2016-03       Impact factor: 24.884

2.  Evaluating (and Improving) the Correspondence Between Deep Neural Networks and Human Representations.

Authors:  Joshua C Peterson; Joshua T Abbott; Thomas L Griffiths
Journal:  Cogn Sci       Date:  2018-09-03

3.  Large-Scale, High-Resolution Comparison of the Core Visual Object Recognition Behavior of Humans, Monkeys, and State-of-the-Art Deep Artificial Neural Networks.

Authors:  Rishi Rajalingham; Elias B Issa; Pouya Bashivan; Kohitij Kar; Kailyn Schmidt; James J DiCarlo
Journal:  J Neurosci       Date:  2018-07-13       Impact factor: 6.167

4.  Maximum differentiation (MAD) competition: a methodology for comparing computational models of perceptual quantities.

Authors:  Zhou Wang; Eero P Simoncelli
Journal:  J Vis       Date:  2008-09-23       Impact factor: 2.240

5.  Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.

Authors:  Nikolaus Kriegeskorte
Journal:  Annu Rev Vis Sci       Date:  2015-11-24       Impact factor: 6.422

6.  Humans can decipher adversarial images.

Authors:  Zhenglong Zhou; Chaz Firestone
Journal:  Nat Commun       Date:  2019-03-22       Impact factor: 14.919

7.  Deep convolutional networks do not classify based on global object shape.

Authors:  Nicholas Baker; Hongjing Lu; Gennady Erlikhman; Philip J Kellman
Journal:  PLoS Comput Biol       Date:  2018-12-07       Impact factor: 4.475

8.  Individual differences among deep neural network models.

Authors:  Johannes Mehrer; Courtney J Spoerer; Nikolaus Kriegeskorte; Tim C Kietzmann
Journal:  Nat Commun       Date:  2020-11-12       Impact factor: 14.919

9.  Deep Neural Networks as a Computational Model for Human Shape Sensitivity.

Authors:  Jonas Kubilius; Stefania Bracci; Hans P Op de Beeck
Journal:  PLoS Comput Biol       Date:  2016-04-28       Impact factor: 4.475

10.  Deep Convolutional Neural Networks Outperform Feature-Based But Not Categorical Models in Explaining Object Similarity Judgments.

Authors:  Kamila M Jozwik; Nikolaus Kriegeskorte; Katherine R Storrs; Marieke Mur
Journal:  Front Psychol       Date:  2017-10-09
Cited by:  12 in total

1.  The brain produces mind by modeling.

Authors:  Richard M Shiffrin; Danielle S Bassett; Nikolaus Kriegeskorte; Joshua B Tenenbaum
Journal:  Proc Natl Acad Sci U S A       Date:  2020-11-24       Impact factor: 11.205

2.  The neural architecture of language: Integrative modeling converges on predictive processing.

Authors:  Martin Schrimpf; Idan Asher Blank; Greta Tuckute; Carina Kauf; Eghbal A Hosseini; Nancy Kanwisher; Joshua B Tenenbaum; Evelina Fedorenko
Journal:  Proc Natl Acad Sci U S A       Date:  2021-11-09       Impact factor: 11.205

3.  Face dissimilarity judgments are predicted by representational distance in morphable and image-computable models.

Authors:  Kamila M Jozwik; Jonathan O'Keeffe; Katherine R Storrs; Wenxuan Guo; Tal Golan; Nikolaus Kriegeskorte
Journal:  Proc Natl Acad Sci U S A       Date:  2022-06-29       Impact factor: 12.779

4.  Deep neural network models of sound localization reveal how perception is adapted to real-world environments.

Authors:  Andrew Francl; Josh H McDermott
Journal:  Nat Hum Behav       Date:  2022-01-27

5.  Unsupervised neural network models of the ventral visual stream.

Authors:  Chengxu Zhuang; Siming Yan; Aran Nayebi; Martin Schrimpf; Michael C Frank; James J DiCarlo; Daniel L K Yamins
Journal:  Proc Natl Acad Sci U S A       Date:  2021-01-19       Impact factor: 12.779

6.  Five points to check when comparing visual perception in humans and machines.

Authors:  Christina M Funke; Judy Borowski; Karolina Stosio; Wieland Brendel; Thomas S A Wallis; Matthias Bethge
Journal:  J Vis       Date:  2021-03-01       Impact factor: 2.240

7.  Distinguishing mirror from glass: A "big data" approach to material perception.

Authors:  Hideki Tamura; Konrad Eugen Prokott; Roland W Fleming
Journal:  J Vis       Date:  2022-03-02       Impact factor: 2.240

8.  Using deep learning to predict human decisions and using cognitive models to explain deep learning models.

Authors:  Matan Fintz; Margarita Osadchy; Uri Hertz
Journal:  Sci Rep       Date:  2022-03-18       Impact factor: 4.379

9.  Unsupervised learning predicts human perception and misperception of gloss.

Authors:  Katherine R Storrs; Barton L Anderson; Roland W Fleming
Journal:  Nat Hum Behav       Date:  2021-05-06

10.  Grounding deep neural network predictions of human categorization behavior in understandable functional features: The case of face identity.

Authors:  Christoph Daube; Tian Xu; Jiayu Zhan; Andrew Webb; Robin A A Ince; Oliver G B Garrod; Philippe G Schyns
Journal:  Patterns (N Y)       Date:  2021-09-10
