ADMM Attack: An Enhanced Adversarial Attack for Deep Neural Networks with Undetectable Distortions

Adversarial Examples

Abstract

Many recent studies demonstrate that state-of-the-art deep neural networks (DNNs) can be easily fooled by adversarial examples, which are generated by adding carefully crafted, visually imperceptible distortions to original legitimate inputs. An adversarial example can lead a DNN to misclassify it as any target label. In the literature, various methods have been proposed to minimize different lp norms of the distortion, but a versatile framework covering all types of adversarial attacks has been lacking. To better understand the security properties of DNNs, we propose a general framework for constructing adversarial examples that leverages the Alternating Direction Method of Multipliers (ADMM) to split the optimization problem, enabling effective minimization of various lp norms of the distortion, including the l0, l1, l2, and l∞ norms. The proposed framework thus unifies the crafting of l0, l1, l2, and l∞ attacks. Experimental results demonstrate that the proposed ADMM attacks achieve both higher attack success rates and smaller distortions than state-of-the-art attack methods.
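The core idea of the ADMM splitting above can be illustrated on a toy problem. The sketch below is not the paper's method: it replaces the DNN attack loss with a simple quadratic term and uses the l1 norm as the distortion penalty, so both ADMM subproblems have closed forms (a least-squares update and a soft-thresholding proximal step). All function names and parameters are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise shrinkage)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_distortion(a, lam=0.5, rho=1.0, iters=300):
    """Toy ADMM split: minimize ||d - a||^2 + lam * ||z||_1  s.t. d = z.

    The quadratic ||d - a||^2 stands in for the smooth attack-loss term;
    in the actual attack it would involve the DNN's outputs. The l1 term
    plays the role of the lp distortion norm being minimized."""
    d = np.zeros_like(a)
    z = np.zeros_like(a)
    u = np.zeros_like(a)  # scaled dual variable
    for _ in range(iters):
        # d-update: closed-form minimizer of ||d - a||^2 + (rho/2)||d - z + u||^2
        d = (2.0 * a + rho * (z - u)) / (2.0 + rho)
        # z-update: proximal step on the l1 penalty
        z = soft_threshold(d + u, lam / rho)
        # dual ascent on the consensus constraint d = z
        u = u + d - z
    return z

a = np.array([1.0, 0.1, -2.0])
print(admm_l1_distortion(a))  # converges to soft_threshold(a, lam/2)
```

For this separable problem the minimizer is known in closed form (soft-thresholding of `a` at `lam/2`), which makes it easy to check that the alternating updates converge; in the real attack, the non-convex DNN loss makes the d-update an inner gradient-based solve rather than a closed-form expression.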

Publication
In Proceedings of the 24th Asia and South Pacific Design Automation Conference
Pu Zhao
Research Assistant Professor
