| Literature DB >> 32876562 |
Marin Dujmović, Gaurav Malhotra, Jeffrey S Bowers.
Abstract
Deep convolutional neural networks (DCNNs) are frequently described as the best current models of human and primate vision. An obvious challenge to this claim is the existence of adversarial images that fool DCNNs but are uninterpretable to humans. However, recent research has suggested that there may be similarities in how humans and DCNNs interpret these seemingly nonsense images. We reanalysed data from a high-profile paper and conducted five experiments controlling for different ways in which these images can be generated and selected. We show human-DCNN agreement is much weaker and more variable than previously reported, and that the weak agreement is contingent on the choice of adversarial images and the design of the experiment. Indeed, we find there are well-known methods of generating images for which humans show no agreement with DCNNs. We conclude that adversarial images still pose a challenge to theorists using DCNNs as models of human vision.
Keywords: adversarial images; deep neural networks; human; human vision; neuroscience
Year: 2020 PMID: 32876562 PMCID: PMC7467732 DOI: 10.7554/eLife.55978
Source DB: PubMed Journal: eLife ISSN: 2050-084X Impact factor: 8.140