
Implicit adversarial data augmentation and robustness with Noise-based Learning.

Priyadarshini Panda; Kaushik Roy

Abstract

We introduce a Noise-based Learning (NoL) approach for training neural networks that are intrinsically robust to adversarial attacks. We find that learning random noise introduced with the input, using the same loss function employed during posterior maximization, improves a model's adversarial resistance. We show that the learnt noise performs implicit adversarial data augmentation, boosting a model's adversarial generalization capability. We evaluate our approach's efficacy and provide a simple visualization tool, based on Principal Component Analysis, for understanding adversarial data. We conduct comprehensive experiments on prevailing benchmarks such as MNIST, CIFAR10, CIFAR100, and Tiny ImageNet, and show that our approach performs remarkably well against a wide range of attacks. Furthermore, combining NoL with state-of-the-art defense mechanisms, such as adversarial training, consistently outperforms prior techniques under both white-box and black-box attacks.
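The core idea in the abstract, training an additive input-noise tensor with the same loss used for the model weights, can be sketched in a few lines. The toy model (logistic regression), data, learning rate, and noise initialization below are illustrative assumptions, not the paper's actual architecture or setup:

```python
import numpy as np

# Minimal sketch of the Noise-based Learning (NoL) idea: a learnable noise
# tensor is added to the input and updated by gradients of the SAME loss
# that trains the model weights. All specifics here are assumptions.

rng = np.random.default_rng(0)

# Toy binary classification data (assumed, for illustration only).
X = rng.normal(size=(64, 10))
y = (X[:, 0] > 0).astype(float)

w = rng.normal(scale=0.1, size=10)            # model weights
noise = rng.normal(scale=0.1, size=X.shape)   # learnable input noise

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grads(w, noise):
    Xn = X + noise                    # noisy input, as in NoL
    p = sigmoid(Xn @ w)
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    err = (p - y) / len(y)
    grad_w = Xn.T @ err               # gradient w.r.t. weights
    grad_noise = np.outer(err, w)     # same loss drives the noise update
    return loss, grad_w, grad_noise

lr = 0.5
losses = []
for _ in range(200):
    loss, gw, gn = loss_and_grads(w, noise)
    losses.append(loss)
    w -= lr * gw
    noise -= lr * gn

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because the noise receives gradients from the classification loss itself, it drifts toward directions that perturb the decision, which is the sense in which the paper describes it as implicit adversarial data augmentation.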
Copyright © 2021 Elsevier Ltd. All rights reserved.

Keywords:  Adversarial robustness; Deep learning; Principal Component Analysis

Year:  2021        PMID: 33894652     DOI: 10.1016/j.neunet.2021.04.008

Source DB:  PubMed          Journal:  Neural Netw        ISSN: 0893-6080


  2 in total

1.  Improved Arabic Alphabet Characters Classification Using Convolutional Neural Networks (CNN).

Authors:  Nesrine Wagaa; Hichem Kallel; Nédra Mellouli
Journal:  Comput Intell Neurosci       Date:  2022-01-11

2.  Learning-to-augment strategy using noisy and denoised data: Improving generalizability of deep CNN for the detection of COVID-19 in X-ray images.

Authors:  Mohammad Momeny; Ali Asghar Neshat; Mohammad Arafat Hussain; Solmaz Kia; Mahmoud Marhamati; Ahmad Jahanbakhshi; Ghassan Hamarneh
Journal:  Comput Biol Med       Date:  2021-07-29       Impact factor: 4.589

