Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks

Publication, Conference
Lin, W; Lucas, K; Bauer, L; Reiter, MK; Sharif, M
Published in: Proceedings of Machine Learning Research
January 1, 2022

We propose new, more efficient targeted white-box attacks against deep neural networks. Our attacks better align with the attacker's goal: (1) tricking the model into assigning a higher probability to the target class than to any other class, while (2) staying within an ε-distance of the attacked input. First, we demonstrate a loss function that explicitly encodes (1) and show that Auto-PGD finds more attacks with it. Second, we propose a new attack method, Constrained Gradient Descent (CGD), using a refinement of our loss function that captures both (1) and (2). CGD seeks to satisfy both attacker objectives (misclassification and a bounded ℓp-norm) in a principled manner, as part of the optimization, instead of via ad hoc post-processing techniques (e.g., projection or clipping). We show that CGD is more successful on CIFAR10 (0.9-4.2%) and ImageNet (8.6-13.6%) than state-of-the-art attacks while consuming less time (11.4-18.8%). Statistical tests confirm that our attack outperforms others against leading defenses, across different datasets and values of ε.
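To illustrate the two objectives from the abstract, the sketch below uses a margin-style loss that encodes objective (1) (target logit above every other logit) and a standard PGD-style loop whose clipping step enforces objective (2) as ad hoc post-processing. This is only a hedged illustration of the baseline the paper improves on, not the authors' CGD implementation; the toy linear model, the `margin_loss`/`margin_grad` helpers, and all parameter values are assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "model": logits = W @ x, standing in for a neural network.
W = rng.normal(size=(10, 32))

def margin_loss(x, target):
    """Objective (1) as a loss: the largest non-target logit minus the
    target logit. Negative values mean the target class wins."""
    logits = W @ x
    others = np.delete(logits, target)
    return others.max() - logits[target]

def margin_grad(x, target):
    """Gradient of margin_loss w.r.t. x for the linear toy model
    (a hypothetical helper; a real attack would use autodiff)."""
    logits = W @ x
    other_idx = np.delete(np.arange(len(logits)), target)
    worst = other_idx[np.argmax(logits[other_idx])]
    return W[worst] - W[target]

def pgd_linf(x0, target, eps=0.5, step=0.05, iters=100):
    """Baseline PGD: signed gradient step, then projection back into the
    l_inf ball of radius eps -- the post-processing step that CGD folds
    into the optimization instead."""
    x = x0.copy()
    for _ in range(iters):
        x -= step * np.sign(margin_grad(x, target))
        x = np.clip(x, x0 - eps, x0 + eps)  # ad hoc enforcement of (2)
    return x

x0 = rng.normal(size=32)
target = 3
x_adv = pgd_linf(x0, target)
```

CGD, by contrast, refines the loss so that both the misclassification objective and the ℓp-norm bound are handled within the optimization itself, rather than by the `np.clip` projection above.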


Published In

Proceedings of Machine Learning Research

EISSN

2640-3498

Publication Date

January 1, 2022

Volume

162

Start / End Page

13405 / 13430

Citation

APA
Lin, W., Lucas, K., Bauer, L., Reiter, M. K., & Sharif, M. (2022). Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks. In Proceedings of Machine Learning Research (Vol. 162, pp. 13405–13430).

Chicago
Lin, W., K. Lucas, L. Bauer, M. K. Reiter, and M. Sharif. “Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks.” In Proceedings of Machine Learning Research, 162:13405–30, 2022.

ICMJE
Lin W, Lucas K, Bauer L, Reiter MK, Sharif M. Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks. In: Proceedings of Machine Learning Research. 2022. p. 13405–30.

MLA
Lin, W., et al. “Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks.” Proceedings of Machine Learning Research, vol. 162, 2022, pp. 13405–30.

NLM
Lin W, Lucas K, Bauer L, Reiter MK, Sharif M. Constrained Gradient Descent: A Powerful and Principled Evasion Attack Against Neural Networks. Proceedings of Machine Learning Research. 2022. p. 13405–13430.
