Enhancing Image Classification Robustness through Adversarial Sampling with Delta Data Augmentation (DDA)
Academic Article in Scopus
Abstract
Deep learning models are susceptible to adversarial attacks, highlighting the critical need for enhanced adversarial robustness. Recent studies have shown that minor alterations to the input can significantly affect the model's prediction accuracy, making it prone to such attacks. In our study, we present the Delta Data Augmentation (DDA) technique, a novel approach to improving transfer adversarial robustness by using perturbations derived from models trained to resist adversarial threats. Unlike conventional methods that attack the model directly, our approach sources adversarial perturbations from higher-level tasks and integrates them into the training of new tasks. This strategy aims to increase both the robustness and the adversarial diversity of the datasets. Through extensive empirical testing, we showcase the superiority of our data augmentation strategy over existing leading methods in enhancing adversarial robustness. This is particularly evident in our evaluations using Projected Gradient Descent (PGD) attacks with ℓ2 and ℓ∞ norms on datasets such as CIFAR10, CIFAR100, SVHN, MNIST, and FashionMNIST. © 2024 IEEE.
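The core idea, as the abstract describes it, is to generate adversarial perturbations (deltas) with a PGD-style attack against a robust source model and then reuse those deltas as extra training samples for a new task. A minimal sketch of that pipeline is shown below, using a toy logistic classifier in NumPy; the linear model, all variable names, and the hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pgd_delta(x, y, w, b, eps=0.3, alpha=0.05, steps=10):
    """L-infinity PGD against a binary logistic classifier (a toy stand-in
    for the robust 'source' model in DDA). Returns the perturbation delta
    itself rather than the perturbed input, since DDA reuses the deltas."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        z = np.dot(x + delta, w) + b
        p = 1.0 / (1.0 + np.exp(-z))           # sigmoid probability
        grad = (p - y) * w                     # gradient of BCE loss w.r.t. the input
        delta = delta + alpha * np.sign(grad)  # ascend the loss (signed gradient step)
        delta = np.clip(delta, -eps, eps)      # project back onto the eps-ball
    return delta

# Delta Data Augmentation step: pair each clean sample with a perturbed copy,
# enlarging the training set of the new task (names here are hypothetical).
rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.1   # parameters of the (toy) robust source model
x, y = rng.normal(size=4), 1.0   # one training sample from the new task
delta = pgd_delta(x, y, w, b)
augmented = np.vstack([x, x + delta])  # original + adversarially augmented sample
```

In a real setting the source model would be an adversarially trained deep network and the gradient would come from backpropagation, but the structure is the same: compute a bounded delta against the robust model, then append `x + delta` to the new task's training data.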