
Privacy Leakage of Adversarial Training Models in Federated Learning Systems

Publication, Conference
Zhang, J; Chen, Y; Li, H
Published in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
January 1, 2022

Adversarial Training (AT) is crucial for obtaining deep neural networks that are robust to adversarial attacks, yet recent works have found that it can also make models more vulnerable to privacy attacks. In this work, we further reveal this unsettling property of AT by designing a novel privacy attack that is practically applicable to privacy-sensitive Federated Learning (FL) systems. Using our method, the attacker can exploit AT models in the FL system to accurately reconstruct users' private training images even when the training batch size is large. Code is available at https://github.com/zjysteven/PrivayAttack_AT_FL.
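For readers unfamiliar with this class of attack: privacy attacks on FL updates are commonly framed as gradient inversion, where the attacker optimizes dummy inputs until the gradients they induce match the gradients in a client's reported update. The sketch below illustrates only that generic idea and is not the attack proposed in this paper; the model, image shape, known labels, and hyperparameters are assumptions for illustration.

    # Illustrative gradient-inversion sketch (in the spirit of "Deep Leakage from
    # Gradients"); NOT the paper's method. Assumes the attacker already knows the
    # labels and observes per-parameter gradients from a client's FL update.
    import torch
    import torch.nn as nn

    def reconstruct_from_gradients(model, target_grads, labels,
                                   img_shape=(1, 3, 32, 32), steps=300, lr=0.1):
        """Optimize dummy images so their gradients match the observed gradients."""
        dummy = torch.randn(img_shape, requires_grad=True)   # random initial guess
        optimizer = torch.optim.Adam([dummy], lr=lr)
        criterion = nn.CrossEntropyLoss()

        for _ in range(steps):
            optimizer.zero_grad()
            loss = criterion(model(dummy), labels)
            # Gradients of the loss w.r.t. the model parameters for the dummy batch
            grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
            # Match dummy gradients to the gradients contained in the client's update
            grad_diff = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
            grad_diff.backward()
            optimizer.step()

        return dummy.detach()

In practice such a procedure is run against the shared global model and the gradients carried in a client's update; the paper's point is that adversarially trained models make this kind of reconstruction substantially more accurate, even for large batch sizes.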


Published In

IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

DOI

10.1109/CVPRW56347.2022.00021

EISSN

2160-7516

ISSN

2160-7508

ISBN

9781665487399

Publication Date

January 1, 2022

Volume

2022-June

Start / End Page

107 / 113
 

Citation

APA
Zhang, J., Chen, Y., & Li, H. (2022). Privacy Leakage of Adversarial Training Models in Federated Learning Systems. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (Vol. 2022-June, pp. 107–113). https://doi.org/10.1109/CVPRW56347.2022.00021

Chicago
Zhang, J., Y. Chen, and H. Li. “Privacy Leakage of Adversarial Training Models in Federated Learning Systems.” In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2022-June:107–13, 2022. https://doi.org/10.1109/CVPRW56347.2022.00021.

ICMJE
Zhang J, Chen Y, Li H. Privacy Leakage of Adversarial Training Models in Federated Learning Systems. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. 2022. p. 107–13.

MLA
Zhang, J., et al. “Privacy Leakage of Adversarial Training Models in Federated Learning Systems.” IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, vol. 2022-June, 2022, pp. 107–13. Scopus, doi:10.1109/CVPRW56347.2022.00021.

NLM
Zhang J, Chen Y, Li H. Privacy Leakage of Adversarial Training Models in Federated Learning Systems. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. 2022. p. 107–113.
