Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning
Publication, Preprint
Zhang, Y; Gong, N; Reiter, MK
May 9, 2024
Publication Date
May 9, 2024
Citation
APA: Zhang, Y., Gong, N., & Reiter, M. K. (2024). Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning.
Chicago: Zhang, Yujie, Neil Gong, and Michael K. Reiter. "Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning," May 9, 2024.
ICMJE: Zhang Y, Gong N, Reiter MK. Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning. 2024.
MLA: Zhang, Yujie, et al. Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning. 9 May 2024.
NLM: Zhang Y, Gong N, Reiter MK. Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning. 2024.