
What do adversarial images tell us about human vision?

Marin Dujmović, Gaurav Malhotra, Jeffrey S Bowers

Abstract

Deep convolutional neural networks (DCNNs) are frequently described as the best current models of human and primate vision. An obvious challenge to this claim is the existence of adversarial images that fool DCNNs but are uninterpretable to humans. However, recent research has suggested that there may be similarities in how humans and DCNNs interpret these seemingly nonsense images. We reanalysed data from a high-profile paper and conducted five experiments controlling for different ways in which these images can be generated and selected. We show human-DCNN agreement is much weaker and more variable than previously reported, and that the weak agreement is contingent on the choice of adversarial images and the design of the experiment. Indeed, we find there are well-known methods of generating images for which humans show no agreement with DCNNs. We conclude that adversarial images still pose a challenge to theorists using DCNNs as models of human vision.
© 2020, Dujmović et al.

Keywords:  adversarial images; deep neural networks; human; human vision; neuroscience

Year:  2020        PMID: 32876562      PMCID: PMC7467732          DOI: 10.7554/eLife.55978

Source DB:  PubMed          Journal:  eLife        ISSN: 2050-084X            Impact factor:   8.140


References:  17 in total

1.  Evidence for complete translational and reflectional invariance in visual object priming.

Authors:  I Biederman; E E Cooper
Journal:  Perception       Date:  1991       Impact factor: 1.490

2.  Hiding a plane with a pixel: examining shape-bias in CNNs and the benefit of building in biological constraints.

Authors:  Gaurav Malhotra; Benjamin D Evans; Jeffrey S Bowers
Journal:  Vision Res       Date:  2020-06-28       Impact factor: 1.886

3.  Surface versus edge-based determinants of visual recognition.

Authors:  I Biederman; G Ju
Journal:  Cogn Psychol       Date:  1988-01       Impact factor: 3.468

4.  Performance-optimized hierarchical models predict neural responses in higher visual cortex.

Authors:  Daniel L K Yamins; Ha Hong; Charles F Cadieu; Ethan A Solomon; Darren Seibert; James J DiCarlo
Journal:  Proc Natl Acad Sci U S A       Date:  2014-05-08       Impact factor: 11.205

5.  [Review] Using goal-driven deep learning models to understand sensory cortex.

Authors:  Daniel L K Yamins; James J DiCarlo
Journal:  Nat Neurosci       Date:  2016-03       Impact factor: 24.884

6.  Recognition-by-components: a theory of human image understanding.

Authors:  Irving Biederman
Journal:  Psychol Rev       Date:  1987-04       Impact factor: 8.934

7.  Evaluating (and Improving) the Correspondence Between Deep Neural Networks and Human Representations.

Authors:  Joshua C Peterson; Joshua T Abbott; Thomas L Griffiths
Journal:  Cogn Sci       Date:  2018-09-03

8.  Large-Scale, High-Resolution Comparison of the Core Visual Object Recognition Behavior of Humans, Monkeys, and State-of-the-Art Deep Artificial Neural Networks.

Authors:  Rishi Rajalingham; Elias B Issa; Pouya Bashivan; Kohitij Kar; Kailyn Schmidt; James J DiCarlo
Journal:  J Neurosci       Date:  2018-07-13       Impact factor: 6.167

9.  Deep neural networks rival the representation of primate IT cortex for core visual object recognition.

Authors:  Charles F Cadieu; Ha Hong; Daniel L K Yamins; Nicolas Pinto; Diego Ardila; Ethan A Solomon; Najib J Majaj; James J DiCarlo
Journal:  PLoS Comput Biol       Date:  2014-12-18       Impact factor: 4.475

10.  Deep supervised, but not unsupervised, models may explain IT cortical representation.

Authors:  Seyed-Mahdi Khaligh-Razavi; Nikolaus Kriegeskorte
Journal:  PLoS Comput Biol       Date:  2014-11-06       Impact factor: 4.475

Cited by:  4 in total

1.  Davida's deficits: weak encoding of impoverished stimuli or faulty egocentric representation?

Authors:  Dina V Popovkina; Anitha Pasupathy
Journal:  Cogn Neuropsychol       Date:  2022-06-08       Impact factor: 3.750

2.  Feature blindness: A challenge for understanding and modelling visual object recognition.

Authors:  Gaurav Malhotra; Marin Dujmović; Jeffrey S Bowers
Journal:  PLoS Comput Biol       Date:  2022-05-13       Impact factor: 4.779

3.  Performance vs. competence in human-machine comparisons.

Authors:  Chaz Firestone
Journal:  Proc Natl Acad Sci U S A       Date:  2020-10-13       Impact factor: 11.205

4.  Five points to check when comparing visual perception in humans and machines.

Authors:  Christina M Funke; Judy Borowski; Karolina Stosio; Wieland Brendel; Thomas S A Wallis; Matthias Bethge
Journal:  J Vis       Date:  2021-03-01       Impact factor: 2.240

