Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks

Publication, Journal Article
Lin, W; Lucas, K; Bauer, L; Reiter, MK; Sharif, M
December 28, 2021

We propose new, more efficient targeted white-box attacks against deep neural networks. Our attacks better align with the attacker's goal: (1) tricking a model into assigning higher probability to the target class than to any other class, while (2) staying within an $\epsilon$-distance of the attacked input. First, we demonstrate a loss function that explicitly encodes (1) and show that Auto-PGD finds more attacks with it. Second, we propose a new attack method, Constrained Gradient Descent (CGD), using a refinement of our loss function that captures both (1) and (2). CGD seeks to satisfy both attacker objectives -- misclassification and a bounded $\ell_{p}$-norm -- in a principled manner, as part of the optimization, instead of via ad hoc post-processing techniques (e.g., projection or clipping). We show that CGD is more successful on CIFAR10 (by 0.9--4.2%) and ImageNet (by 8.6--13.6%) than state-of-the-art attacks while consuming less time (11.4--18.8% less). Statistical tests confirm that our attack outperforms others against leading defenses on different datasets and values of $\epsilon$.
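The abstract describes the method only at a high level. As a rough illustration, the PyTorch sketch below encodes objective (1) as a logit-margin term (the target logit should exceed every other logit) and objective (2) as a hinge penalty on the $\ell_\infty$ distance, minimizing both jointly by plain gradient descent rather than projecting or clipping after each step. The penalty formulation, the weight lam, the fixed step size, and the batch-of-one indexing are illustrative assumptions, not the paper's exact CGD procedure.

import torch

def cgd_style_attack(model, x0, target, eps=8/255, steps=100, lr=0.01, lam=10.0):
    # Illustrative sketch only: both attacker objectives enter the loss,
    # so no post-hoc projection/clipping is applied. Assumes a single
    # input (batch size 1) and an l-infinity budget eps.
    delta = torch.zeros_like(x0, requires_grad=True)
    opt = torch.optim.SGD([delta], lr=lr)
    for _ in range(steps):
        logits = model(x0 + delta)
        # Objective (1): drive the target logit above every other logit.
        others = logits.clone()
        others[0, target] = float('-inf')
        margin = others.max() - logits[0, target]  # attack succeeds when < 0
        # Objective (2): hinge penalty for leaving the eps-ball.
        overflow = torch.clamp(delta.abs().max() - eps, min=0)
        loss = margin + lam * overflow
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x0 + delta).detach()

A fixed penalty weight is the simplest way to fold the norm constraint into the objective; schedule-based or per-coordinate treatments of the constraint are equally plausible readings of the abstract.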

Citation

Lin, Weiran, Keane Lucas, Lujo Bauer, Michael K. Reiter, and Mahmood Sharif. "Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks." December 28, 2021.
