A general framework for adversarial examples with objectives

Publication, Journal Article
Sharif, M; Bhagavatula, S; Bauer, L; Reiter, MK
Published in: ACM Transactions on Privacy and Security
June 10, 2019

Images perturbed subtly to be misclassified by neural networks, called adversarial examples, have emerged as a technically deep challenge and an important concern for several application domains. Most research on adversarial examples takes as its only constraint that the perturbed images are similar to the originals. However, real-world application of these ideas often requires the examples to satisfy additional objectives, which are typically enforced through custom modifications of the perturbation process. In this article, we propose adversarial generative nets (AGNs), a general methodology to train a generator neural network to emit adversarial examples satisfying desired objectives. We demonstrate the ability of AGNs to accommodate a wide range of objectives, including imprecise ones difficult to model, in two application domains. In particular, we demonstrate physical adversarial examples—eyeglass frames designed to fool face recognition—with better robustness, inconspicuousness, and scalability than previous approaches, as well as a new attack to fool a handwritten-digit classifier.
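To make the abstract's idea of "training a generator network to emit adversarial examples satisfying desired objectives" concrete, here is a minimal, hypothetical sketch of one AGN-style training step in PyTorch. The network definitions, the overlay compositing helper, the latent dimension, and the loss weighting are illustrative assumptions, not the authors' architectures or released code; the core idea shown is a GAN-like setup where the generator is additionally penalized unless its output causes a fixed target classifier to misclassify.

# Hypothetical sketch of one AGN-style training step (assumptions, not the article's code).
# `generator`, `discriminator`, and `target_model` are placeholder networks;
# `real_accessories` are images of benign eyeglass-frame designs; the target
# classifier stays fixed while the generator learns perturbations that both look
# plausible (GAN objective) and cause misclassification (attack objective).
import torch
import torch.nn.functional as F


def overlay(faces, accessories):
    """Placeholder for compositing the generated accessory onto face images."""
    return torch.clamp(faces + accessories, 0.0, 1.0)


def agn_step(generator, discriminator, target_model, real_accessories, faces,
             target_class, g_opt, d_opt, latent_dim=100, fool_weight=1.0):
    batch = faces.size(0)
    z = torch.randn(batch, latent_dim, device=faces.device)

    # Discriminator update: distinguish real accessory designs from generated ones.
    fake = generator(z)
    d_real = discriminator(real_accessories)
    d_fake = discriminator(fake.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: look realistic to the discriminator AND push the fixed
    # target model toward the attacker-chosen class.
    fake = generator(z)
    d_out = discriminator(fake)
    realism_loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    logits = target_model(overlay(faces, fake))
    labels = torch.full((batch,), target_class, dtype=torch.long, device=faces.device)
    fool_loss = F.cross_entropy(logits, labels)
    g_loss = realism_loss + fool_weight * fool_loss
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

In this sketch the realism term encourages inconspicuous outputs while the misclassification term drives the attack; the article's contribution is precisely that such additional objectives can be folded into the generator's training rather than hand-coded into the perturbation process.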


Published In

ACM Transactions on Privacy and Security

DOI

10.1145/3317611

EISSN

2471-2574

ISSN

2471-2566

Publication Date

June 10, 2019

Volume

22

Issue

3
 

Citation

APA: Sharif, M., Bhagavatula, S., Bauer, L., & Reiter, M. K. (2019). A general framework for adversarial examples with objectives. ACM Transactions on Privacy and Security, 22(3). https://doi.org/10.1145/3317611
Chicago: Sharif, M., S. Bhagavatula, L. Bauer, and M. K. Reiter. “A general framework for adversarial examples with objectives.” ACM Transactions on Privacy and Security 22, no. 3 (June 10, 2019). https://doi.org/10.1145/3317611.
ICMJE: Sharif M, Bhagavatula S, Bauer L, Reiter MK. A general framework for adversarial examples with objectives. ACM Transactions on Privacy and Security. 2019 Jun 10;22(3).
MLA: Sharif, M., et al. “A general framework for adversarial examples with objectives.” ACM Transactions on Privacy and Security, vol. 22, no. 3, June 2019. Scopus, doi:10.1145/3317611.
NLM: Sharif M, Bhagavatula S, Bauer L, Reiter MK. A general framework for adversarial examples with objectives. ACM Transactions on Privacy and Security. 2019 Jun 10;22(3).
