
Adversarial Examples for Hamming Space Search.

Erkun Yang, Tongliang Liu, Cheng Deng, Dacheng Tao.   

Abstract

Due to its strong representation learning ability and its facilitation of joint learning for representations and hash codes, deep learning-to-hash has achieved promising results and is becoming increasingly popular for large-scale approximate nearest neighbor search. However, recent studies highlight the vulnerability of deep image classifiers to adversarial examples; this also raises profound security concerns for deep retrieval systems. Accordingly, to study the robustness of modern deep hashing models to adversarial perturbations, we propose hash adversary generation (HAG), a novel method of crafting adversarial examples for Hamming space search. The main goal of HAG is to generate imperceptibly perturbed examples as queries, whose nearest neighbors under a targeted hashing model are semantically irrelevant to the original queries. Extensive experiments demonstrate that HAG can successfully craft adversarial examples with small perturbations that mislead targeted hashing models. The transferability of these perturbations under a variety of settings is also verified. Moreover, by combining heterogeneous perturbations, we further provide a simple yet effective method of constructing adversarial examples for black-box attacks.
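The attack described in the abstract can be caricatured as a projected-gradient search that pushes a continuously relaxed hash code away from the query's original binary code while keeping the perturbation inside a small L-infinity ball. The sketch below is illustrative only, not the paper's exact formulation: it substitutes a hypothetical linear hash model for a deep network, and the function names (`hash_logits`, `hag_sketch`), loss, and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a deep hashing model: a fixed linear map whose
# sign gives K-bit codes in {-1, +1}^K. (The paper attacks deep CNN hashers.)
D, K = 32, 16
W = rng.normal(size=(K, D))

def hash_logits(x):
    """Continuous outputs before the sign() binarization."""
    return W @ x

def hash_code(x):
    """Binary hash code in {-1, +1}^K."""
    return np.sign(hash_logits(x))

def hag_sketch(x, eps=0.1, alpha=0.01, steps=100):
    """PGD-style sketch: minimize agreement between the tanh-relaxed code of
    the perturbed query and the original code, within an L_inf ball of eps."""
    c = hash_code(x)                       # code to move away from
    delta = np.zeros_like(x)
    for _ in range(steps):
        z = hash_logits(x + delta)
        # Loss = <tanh(z), c>; its gradient w.r.t. delta via the chain rule:
        grad = W.T @ ((1.0 - np.tanh(z) ** 2) * c)
        delta -= alpha * np.sign(grad)     # signed-gradient descent step
        delta = np.clip(delta, -eps, eps)  # keep perturbation imperceptible
    return x + delta

x = rng.normal(size=D)
x_adv = hag_sketch(x)
dist = int(np.sum(hash_code(x) != hash_code(x_adv)))  # Hamming distance
```

Because nearest-neighbor retrieval in Hamming space ranks by code distance, flipping even a few bits of the query's code is enough to pull back semantically irrelevant neighbors, which is why the objective targets the code rather than a class label.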

Year:  2018        PMID: 30561358     DOI: 10.1109/TCYB.2018.2882908

Source DB:  PubMed          Journal:  IEEE Trans Cybern        ISSN: 2168-2267            Impact factor:   11.448


  1 in total

1.  Quadruplet-Based Deep Cross-Modal Hashing.

Authors:  Huan Liu; Jiang Xiong; Nian Zhang; Fuming Liu; Xitao Zou
Journal:  Comput Intell Neurosci       Date:  2021-07-02
