Emily Wenger

Cue Family Assistant Professor
Electrical and Computer Engineering

Selected Publications


Salsa Fresca: Angular Embeddings and Pre-Training for ML Attacks on Learning With Errors

Journal Article · Transactions on Machine Learning Research · January 1, 2025
Learning with Errors (LWE) is a hard math problem underlying recently standardized post-quantum cryptography (PQC) systems for key exchange and digital signatures (Chen et al., 2022). Prior work (Wenger et al., 2022; Li et al., 2023a;b) proposed new machin ...

The Cool and the Cruel: Separating Hard Parts of LWE Secrets

Conference · Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) · January 1, 2024
Sparse binary LWE secrets are under consideration for standardization for Homomorphic Encryption and its applications to private computation [20]. Known attacks on sparse binary LWE secrets include the sparse dual attack [5] and the hybrid sparse dual-meet ...

Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models

Conference · 32nd USENIX Security Symposium (USENIX Security 2023) · January 1, 2023
Recent text-to-image diffusion models such as MidJourney and Stable Diffusion threaten to displace many in the professional artist community. In particular, models can learn to mimic the artistic style of specific artists after “fine-tuning” on samples of ...

SoK: Anti-Facial Recognition Technology

Conference · Proceedings of the IEEE Symposium on Security and Privacy · January 1, 2023
The rapid adoption of facial recognition (FR) technology by both government and commercial entities in recent years has raised concerns about civil liberties and privacy. In response, a broad suite of so-called "anti-facial recognition" (AFR) tools has been ...

SALSA VERDE: a machine learning attack on Learning With Errors with sparse small secrets

Conference · Advances in Neural Information Processing Systems · January 1, 2023
Learning with Errors (LWE) is a hard math problem used in post-quantum cryptography. Homomorphic Encryption (HE) schemes rely on the hardness of the LWE problem for their security, and two LWE-based cryptosystems were recently standardized by NIST for digi ...

Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models

Conference · Proceedings of the ACM Conference on Computer and Communications Security · November 7, 2022
Server breaches are an unfortunate reality on today's Internet. In the context of deep neural network (DNN) models, they are particularly harmful, because a leaked model gives an attacker "white-box" access to generate adversarial examples, a threat model ...

Private movie recommendations for children

Chapter · January 4, 2022
Data-driven business models such as recommender systems (Netflix, Pandora) and targeted advertising platforms (Facebook, Google) heavily rely on consumer data and information about individual behavior patterns and preferences. In this work, we look at usin ...

SALSA: Attacking Lattice Cryptography with Transformers

Conference · Advances in Neural Information Processing Systems · January 1, 2022
Currently deployed public-key cryptosystems will be vulnerable to attacks by full-scale quantum computers. Consequently, “quantum resistant” cryptosystems are in high demand, and lattice-based cryptosystems, based on a hard problem known as Learning With E ...

Finding Naturally Occurring Physical Backdoors in Image Datasets

Conference · Advances in Neural Information Processing Systems · January 1, 2022
Extensive literature on backdoor poison attacks has studied attacks and defenses for backdoors using “digital trigger patterns.” In contrast, “physical backdoors” use physical objects as triggers, have only recently been identified, and are qualitatively d ...

Blacklight: Scalable Defense for Neural Networks against Query-Based Black-Box Attacks

Conference · Proceedings of the 31st USENIX Security Symposium (USENIX Security 2022) · January 1, 2022
Deep learning systems are known to be vulnerable to adversarial examples. In particular, query-based black-box attacks do not require knowledge of the deep learning model, but can compute adversarial examples over the network by submitting queries and insp ...

Salsa Picante: A Machine Learning Attack On LWE with Binary Secrets

Conference · CCS 2023: Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security · November 21, 2023
Learning With Errors (LWE) is a hard math problem underpinning many proposed post-quantum cryptographic (PQC) systems. The only PQC Key Exchange Mechanism (KEM) standardized by NIST [13] is based on module LWE, and current publicly available PQ Homomorphic ...

"hello, It's Me": Deep Learning-based Speech Synthesis Attacks in the Real World

Conference · Proceedings of the ACM Conference on Computer and Communications Security · November 13, 2021
Advances in deep learning have introduced a new wave of voice synthesis tools, capable of producing audio that sounds as if spoken by a target speaker. If successful, such tools in the wrong hands will enable a range of powerful attacks against both humans ...

Backdoor Attacks Against Deep Learning Systems in the Physical World

Conference · Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition · January 1, 2021
Backdoor attacks embed hidden malicious behaviors into deep learning models, which only activate and cause misclassifications on model inputs containing a specific “trigger.” Existing works on backdoor attacks and defenses, however, mostly focus on digital ...

Gotta Catch 'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks

Conference · Proceedings of the ACM Conference on Computer and Communications Security · October 30, 2020
Deep neural networks (DNN) are known to be vulnerable to adversarial attacks. Numerous efforts either try to patch weaknesses in trained models, or try to make it difficult or costly to compute adversarial examples that exploit them. In our work, we explor ...

Fawkes: Protecting privacy against unauthorized deep learning models

Conference · Proceedings of the 29th USENIX Security Symposium · January 1, 2020
Today's proliferation of powerful facial recognition systems poses a real threat to personal privacy. As Clearview.ai demonstrated, anyone can canvas the Internet for data and train highly accurate facial recognition models of individuals without their kno ...