
A System-Driven Taxonomy of Attacks and Defenses in Adversarial Machine Learning.

Koosha Sadeghi; Ayan Banerjee; Sandeep K. S. Gupta

Abstract

Machine Learning (ML) algorithms, specifically supervised learning, are widely used in modern real-world applications that rely on Computational Intelligence (CI) as their core technology, such as autonomous vehicles, assistive robots, and biometric systems. Attacks that cause misclassifications or mispredictions can lead to erroneous decisions and unreliable operation. Designing robust ML that provides reliable results in the presence of such attacks has become a top priority in the field of adversarial machine learning. An essential driver of rapid progress in robust ML is the arms race between attack and defense strategists. However, an important prerequisite for this arms race is access to a well-defined system model, so that experiments can be repeated by independent researchers. This paper proposes a fine-grained system-driven taxonomy to specify ML applications and adversarial system models unambiguously, so that independent researchers can replicate experiments and escalate the arms race to develop more evolved and robust ML applications. The paper provides taxonomies for: 1) the dataset, 2) the ML architecture, 3) the adversary's knowledge, capability, and goal, 4) the adversary's strategy, and 5) the defense response. In addition, the relationships among these models and taxonomies are analyzed through a proposed adversarial machine learning cycle. The models and taxonomies are merged into a comprehensive system-driven taxonomy that represents the arms race between ML applications and adversaries in recent years. The taxonomies encode best practices in the field, help evaluate and compare the contributions of research works, and reveal gaps in the field.

Keywords:  Computational intelligence; adversarial machine learning; attack model; defense model; supervised learning

Year:  2020        PMID: 33748635      PMCID: PMC7971418          DOI: 10.1109/tetci.2020.2968933

Source DB:  PubMed          Journal:  IEEE Trans Emerg Top Comput Intell        ISSN: 2471-285X


References (9 in total)

1.  Improving generalization performance using double backpropagation.

Authors:  H Drucker; Y Le Cun
Journal:  IEEE Trans Neural Netw       Date:  1992

2.  Adversarial Feature Selection Against Evasion Attacks.

Authors:  Fei Zhang; Patrick P K Chan; Battista Biggio; Daniel S Yeung; Fabio Roli
Journal:  IEEE Trans Cybern       Date:  2015-04-21       Impact factor: 11.448

3.  Adversarial Examples: Attacks and Defenses for Deep Learning.

Authors:  Xiaoyong Yuan; Pan He; Qile Zhu; Xiaolin Li
Journal:  IEEE Trans Neural Netw Learn Syst       Date:  2019-01-14       Impact factor: 10.451

4.  Adversarial attacks on medical machine learning.

Authors:  Samuel G Finlayson; John D Bowers; Joichi Ito; Jonathan L Zittrain; Andrew L Beam; Isaac S Kohane
Journal:  Science       Date:  2019-03-22       Impact factor: 47.728

5.  Randomized Prediction Games for Adversarial Machine Learning.

Authors:  Samuel Rota Bulo; Battista Biggio; Ignazio Pillai; Marcello Pelillo; Fabio Roli
Journal:  IEEE Trans Neural Netw Learn Syst       Date:  2017-11       Impact factor: 10.451

6.  Man vs. computer: benchmarking machine learning algorithms for traffic sign recognition.

Authors:  J Stallkamp; M Schlipsing; J Salmen; C Igel
Journal:  Neural Netw       Date:  2012-02-20

7.  Human-level control through deep reinforcement learning.

Authors:  Volodymyr Mnih; Koray Kavukcuoglu; David Silver; Andrei A Rusu; Joel Veness; Marc G Bellemare; Alex Graves; Martin Riedmiller; Andreas K Fidjeland; Georg Ostrovski; Stig Petersen; Charles Beattie; Amir Sadik; Ioannis Antonoglou; Helen King; Dharshan Kumaran; Daan Wierstra; Shane Legg; Demis Hassabis
Journal:  Nature       Date:  2015-02-26       Impact factor: 49.962

8.  Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning.

Authors:  Hoo-Chang Shin; Holger R Roth; Mingchen Gao; Le Lu; Ziyue Xu; Isabella Nogues; Jianhua Yao; Daniel Mollura; Ronald M Summers
Journal:  IEEE Trans Med Imaging       Date:  2016-02-11       Impact factor: 10.048

9.  Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing.

Authors:  Matthew Fredrikson; Eric Lantz; Somesh Jha; Simon Lin; David Page; Thomas Ristenpart
Journal:  Proc USENIX Secur Symp       Date:  2014-08
Cited by (1 in total)

1.  Exploiting epistemic uncertainty of the deep learning models to generate adversarial samples.

Authors:  Omer Faruk Tuna; Ferhat Ozgur Catak; M Taner Eskil
Journal:  Multimed Tools Appl       Date:  2022-02-18       Impact factor: 2.577

