Fawkes: Protecting privacy against unauthorized deep learning models

Conference Publication
Shan, S; Wenger, E; Zhang, J; Li, H; Zheng, H; Zhao, BY
Published in: Proceedings of the 29th USENIX Security Symposium
January 1, 2020

Today's proliferation of powerful facial recognition systems poses a real threat to personal privacy. As Clearview.ai demonstrated, anyone can canvass the Internet for data and train highly accurate facial recognition models of individuals without their knowledge. We need tools to protect ourselves from potential misuses of unauthorized facial recognition systems. Unfortunately, no practical or effective solutions exist. In this paper, we propose Fawkes, a system that helps individuals inoculate their images against unauthorized facial recognition models. Fawkes achieves this by helping users add imperceptible pixel-level changes (we call them “cloaks”) to their own photos before releasing them. When used to train facial recognition models, these “cloaked” images produce functional models that consistently cause normal images of the user to be misidentified. We experimentally demonstrate that Fawkes provides 95+% protection against user recognition regardless of how trackers train their models. Even when clean, uncloaked images are “leaked” to the tracker and used for training, Fawkes can still maintain an 80+% protection success rate. We achieve 100% success in experiments against today's state-of-the-art facial recognition services. Finally, we show that Fawkes is robust against a variety of countermeasures that try to detect or disrupt image cloaks.
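The abstract's key idea is that a "cloak" is a pixel-level perturbation small enough to be imperceptible. The sketch below is only a conceptual illustration of that bounded-perturbation step, not the paper's method: Fawkes actually *optimizes* the perturbation so the cloaked image's feature-space representation shifts toward a different identity, under a perceptual (DSSIM) budget. Here the perturbation is arbitrary and is simply clipped to a hypothetical L-infinity budget `epsilon`, with pixel values assumed to be floats in [0, 1].

```python
import numpy as np

def apply_cloak(image, perturbation, epsilon=0.03):
    """Add a bounded perturbation to an image (illustrative only).

    `epsilon` is a hypothetical per-pixel budget, not a parameter from
    the Fawkes paper; the real system bounds perceptual (DSSIM)
    distortion and chooses the perturbation by optimization.
    """
    # Bound each pixel change to the [-epsilon, epsilon] budget.
    delta = np.clip(perturbation, -epsilon, epsilon)
    # Keep the cloaked image in the valid pixel range.
    return np.clip(image + delta, 0.0, 1.0)

# Toy usage: a random "photo" and a random candidate perturbation.
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
noise = rng.normal(scale=0.1, size=img.shape)
cloaked = apply_cloak(img, noise)

# The cloaked image never deviates from the original by more than epsilon.
assert np.max(np.abs(cloaked - img)) <= 0.03 + 1e-9
```

The point of the bound is the paper's imperceptibility requirement: the cloaked photo must look unchanged to humans while still steering any model trained on it.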

Published In

Proceedings of the 29th USENIX Security Symposium

Publication Date

January 1, 2020

Start / End Page

1589 / 1604

Citation

APA: Shan, S., Wenger, E., Zhang, J., Li, H., Zheng, H., & Zhao, B. Y. (2020). Fawkes: Protecting privacy against unauthorized deep learning models. In Proceedings of the 29th USENIX Security Symposium (pp. 1589–1604).

Chicago: Shan, S., E. Wenger, J. Zhang, H. Li, H. Zheng, and B. Y. Zhao. “Fawkes: Protecting privacy against unauthorized deep learning models.” In Proceedings of the 29th USENIX Security Symposium, 1589–1604, 2020.

ICMJE: Shan S, Wenger E, Zhang J, Li H, Zheng H, Zhao BY. Fawkes: Protecting privacy against unauthorized deep learning models. In: Proceedings of the 29th USENIX Security Symposium. 2020. p. 1589–604.

MLA: Shan, S., et al. “Fawkes: Protecting privacy against unauthorized deep learning models.” Proceedings of the 29th USENIX Security Symposium, 2020, pp. 1589–604.

NLM: Shan S, Wenger E, Zhang J, Li H, Zheng H, Zhao BY. Fawkes: Protecting privacy against unauthorized deep learning models. Proceedings of the 29th USENIX Security Symposium. 2020. p. 1589–1604.
