Journal Article · Transactions on Machine Learning Research · January 1, 2025
Learning with Errors (LWE) is a hard math problem underlying recently standardized post-quantum cryptography (PQC) systems for key exchange and digital signatures (Chen et al., 2022). Prior work (Wenger et al., 2022; Li et al., 2023a;b) proposed new machin ...
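In standard notation, the search-LWE problem underlying these systems can be sketched as follows (a minimal statement of the usual formulation, not drawn from the abstract itself):

\[
A \in \mathbb{Z}_q^{m \times n}, \qquad s \in \mathbb{Z}_q^{n}, \qquad e \leftarrow \chi^{m}\ (\text{small error}), \qquad b \equiv A\,s + e \pmod{q},
\]

and an attacker given the pair \((A, b)\) must recover the secret \(s\); hardness rests on the added error \(e\), with the sparse-binary and module variants mentioned in later entries restricting or structuring \(s\).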
Conference · Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) · January 1, 2024
Sparse binary LWE secrets are under consideration for standardization for Homomorphic Encryption and its applications to private computation [20]. Known attacks on sparse binary LWE secrets include the sparse dual attack [5] and the hybrid sparse dual-meet ...
Conference · 32nd USENIX Security Symposium, USENIX Security 2023 · January 1, 2023
Recent text-to-image diffusion models such as MidJourney and Stable Diffusion threaten to displace many in the professional artist community. In particular, models can learn to mimic the artistic style of specific artists after “fine-tuning” on samples of ...
Conference · Proceedings, IEEE Symposium on Security and Privacy · January 1, 2023
The rapid adoption of facial recognition (FR) technology by both government and commercial entities in recent years has raised concerns about civil liberties and privacy. In response, a broad suite of so-called "anti-facial recognition" (AFR) tools has been ...
Conference · Advances in Neural Information Processing Systems · January 1, 2023
Learning with Errors (LWE) is a hard math problem used in post-quantum cryptography. Homomorphic Encryption (HE) schemes rely on the hardness of the LWE problem for their security, and two LWE-based cryptosystems were recently standardized by NIST for digi ...
Conference · Proceedings of the ACM Conference on Computer and Communications Security · November 7, 2022
Server breaches are an unfortunate reality on today's Internet. In the context of deep neural network (DNN) models, they are particularly harmful, because a leaked model gives an attacker "white-box" access to generate adversarial examples, a threat model ...
Chapter · January 4, 2022
Data-driven business models such as recommender systems (Netflix, Pandora) and targeted advertising platforms (Facebook, Google) heavily rely on consumer data and information about individual behavior patterns and preferences. In this work, we look at usin ...
Conference · Advances in Neural Information Processing Systems · January 1, 2022
Currently deployed public-key cryptosystems will be vulnerable to attacks by full-scale quantum computers. Consequently, “quantum resistant” cryptosystems are in high demand, and lattice-based cryptosystems, based on a hard problem known as Learning With E ...
Conference · Advances in Neural Information Processing Systems · January 1, 2022
Extensive literature on backdoor poison attacks has studied attacks and defenses for backdoors using “digital trigger patterns.” In contrast, “physical backdoors” use physical objects as triggers, have only recently been identified, and are qualitatively d ...
Conference · Proceedings of the 31st USENIX Security Symposium, Security 2022 · January 1, 2022
Deep learning systems are known to be vulnerable to adversarial examples. In particular, query-based black-box attacks do not require knowledge of the deep learning model, but can compute adversarial examples over the network by submitting queries and insp ...
Conference · CCS 2023: Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security · November 21, 2021
Learning With Errors (LWE) is a hard math problem underpinning many proposed post-quantum cryptographic (PQC) systems. The only PQC Key Exchange Mechanism (KEM) standardized by NIST [13] is based on module LWE, and current publicly available PQ Homomorphic ...
Conference · Proceedings of the ACM Conference on Computer and Communications Security · November 13, 2021
Advances in deep learning have introduced a new wave of voice synthesis tools, capable of producing audio that sounds as if spoken by a target speaker. If successful, such tools in the wrong hands will enable a range of powerful attacks against both humans ...
Conference · Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition · January 1, 2021
Backdoor attacks embed hidden malicious behaviors into deep learning models, which only activate and cause misclassifications on model inputs containing a specific “trigger.” Existing works on backdoor attacks and defenses, however, mostly focus on digital ...
Conference · Proceedings of the ACM Conference on Computer and Communications Security · October 30, 2020
Deep neural networks (DNN) are known to be vulnerable to adversarial attacks. Numerous efforts either try to patch weaknesses in trained models, or try to make it difficult or costly to compute adversarial examples that exploit them. In our work, we explor ...
Conference · Proceedings of the 29th USENIX Security Symposium · January 1, 2020
Today's proliferation of powerful facial recognition systems poses a real threat to personal privacy. As Clearview.ai demonstrated, anyone can canvas the Internet for data and train highly accurate facial recognition models of individuals without their kno ...