
Backdoor Attacks Against Deep Learning Systems in the Physical World

Publication, Conference
Wenger, E; Passananti, J; Bhagoji, AN; Yao, Y; Zheng, H; Zhao, BY
Published in: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
January 1, 2021

Backdoor attacks embed hidden malicious behaviors into deep learning models, which only activate and cause misclassifications on model inputs containing a specific “trigger.” Existing works on backdoor attacks and defenses, however, mostly focus on digital attacks that apply digitally generated patterns as triggers. A critical question remains unanswered: “can backdoor attacks succeed using physical objects as triggers, thus making them a credible threat against deep learning systems in the real world?” We conduct a detailed empirical study to explore this question for facial recognition, a critical deep learning task. Using 7 physical objects as triggers, we collect a custom dataset of 3205 images of 10 volunteers and use it to study the feasibility of “physical” backdoor attacks under a variety of real-world conditions. Our study reveals two key findings. First, physical backdoor attacks can be highly successful if they are carefully configured to overcome the constraints imposed by physical objects. In particular, the placement of successful triggers is largely constrained by the target model's dependence on key facial features. Second, four of today's state-of-the-art defenses against (digital) backdoors are ineffective against physical backdoors, because the use of physical objects breaks core assumptions used to construct these defenses. Our study confirms that (physical) backdoor attacks are not a hypothetical phenomenon but rather pose a serious real-world threat to critical classification tasks. We need new and more robust defenses against backdoors in the physical world.
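To make the threat model concrete, the following is a minimal, hypothetical sketch of the standard digital-trigger backdoor poisoning that the abstract contrasts with physical triggers: a fixed patch is stamped onto a small fraction of training images, which are then relabeled to an attacker-chosen class. The function and parameter names (poison_dataset, poison_frac, target_label) are illustrative and not from the paper, which studies physical objects worn by volunteers as triggers rather than a digital patch.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_frac=0.1, patch_size=6, seed=0):
    """Illustrative digital-trigger backdoor poisoning (not the paper's physical setup).

    Stamps a small white square ("trigger") onto a random subset of training
    images and relabels them as `target_label`. A model trained on the result
    tends to associate the trigger with the target class while behaving
    normally on clean inputs.
    """
    images = images.copy()
    labels = labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(poison_frac * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger in the bottom-right corner of each selected image.
    images[idx, -patch_size:, -patch_size:, :] = 1.0
    # Relabel poisoned samples so the trigger maps to the attacker's target class.
    labels[idx] = target_label
    return images, labels

# Toy usage: poison 10% of a random 32x32 RGB dataset with 10 classes.
if __name__ == "__main__":
    x = np.random.rand(100, 32, 32, 3).astype(np.float32)
    y = np.random.randint(0, 10, size=100)
    x_poisoned, y_poisoned = poison_dataset(x, y, target_label=7)
```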


Published In

Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition

DOI

10.1109/CVPR46437.2021.00614

ISSN

1063-6919

Publication Date

January 1, 2021

Start / End Page

6202 / 6211
 

Citation

APA
Wenger, E., Passananti, J., Bhagoji, A. N., Yao, Y., Zheng, H., & Zhao, B. Y. (2021). Backdoor Attacks Against Deep Learning Systems in the Physical World. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 6202–6211). https://doi.org/10.1109/CVPR46437.2021.00614

Chicago
Wenger, E., J. Passananti, A. N. Bhagoji, Y. Yao, H. Zheng, and B. Y. Zhao. “Backdoor Attacks Against Deep Learning Systems in the Physical World.” In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 6202–11, 2021. https://doi.org/10.1109/CVPR46437.2021.00614.

ICMJE
Wenger E, Passananti J, Bhagoji AN, Yao Y, Zheng H, Zhao BY. Backdoor Attacks Against Deep Learning Systems in the Physical World. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2021. p. 6202–11.

MLA
Wenger, E., et al. “Backdoor Attacks Against Deep Learning Systems in the Physical World.” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2021, pp. 6202–11. Scopus, doi:10.1109/CVPR46437.2021.00614.

NLM
Wenger E, Passananti J, Bhagoji AN, Yao Y, Zheng H, Zhao BY. Backdoor Attacks Against Deep Learning Systems in the Physical World. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2021. p. 6202–6211.
