WAFFLe: Weight Anonymized Factorization for Federated Learning

Publication: Journal Article
Hao, W; Mehta, N; Liang, KJ; Cheng, P; El-Khamy, M; Carin, L
Published in: IEEE Access
January 1, 2022

In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices. In light of this need, federated learning has emerged as a popular training paradigm. However, many federated learning approaches trade transmitting data for communicating updated weight parameters from each local device. Therefore, a successful breach that would otherwise have directly compromised the data instead grants whitebox access to the local model, which opens the door to a number of attacks, including exposing the very data federated learning seeks to protect. Additionally, in distributed scenarios, individual client devices commonly exhibit high statistical heterogeneity. Many common federated approaches learn a single global model; while this may perform well on average, performance degrades when the i.i.d. assumption is violated, underfitting individuals further from the mean and raising questions of fairness. To address these issues, we propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks. Experiments on MNIST, FashionMNIST, and CIFAR-10 demonstrate WAFFLe's significant improvement in local test performance and fairness while simultaneously providing an extra layer of security.
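The core idea in the abstract — each client composing its own weights from a shared dictionary of factors, with per-client factor selection governed by an Indian Buffet Process — can be illustrated with a minimal sketch. All names, shapes, and hyperparameters below are illustrative assumptions, not the paper's actual model; the IBP is approximated with its standard stick-breaking construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared global dictionary of K rank-1 weight factors for one layer.
# (K, d_in, d_out are illustrative, not values from the paper.)
K, d_in, d_out = 8, 16, 4
U = rng.normal(size=(K, d_in))   # input-side factors
V = rng.normal(size=(K, d_out))  # output-side factors

def sample_ibp_mask(K, alpha=2.0, rng=rng):
    """Stick-breaking construction of IBP factor usage: factor k is
    kept with probability pi_k = prod_{i<=k} beta_i, beta_i ~ Beta(alpha, 1),
    so later factors are selected with decaying probability."""
    probs = np.cumprod(rng.beta(alpha, 1.0, size=K))
    return rng.random(K) < probs

def client_weight(mask):
    """A client's layer weight: the sum of its selected rank-1 factors.
    Only the (anonymized) factor selection is client-specific; the
    dictionary U, V is shared."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return np.zeros((d_in, d_out))
    return sum(np.outer(U[k], V[k]) for k in idx)

mask_a = sample_ibp_mask(K)      # one client's binary factor selection
W_a = client_weight(mask_a)      # that client's personalized layer weight
print(W_a.shape)  # (16, 4)
```

Under this sketch, intercepting one client's update reveals contributions to shared factors rather than a full client-specific weight matrix, which is the intuition behind the "extra layer of security" the abstract claims.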

Duke Scholars

Published In

IEEE Access

DOI

10.1109/ACCESS.2022.3172945

EISSN

2169-3536

Publication Date

January 1, 2022

Volume

10

Start / End Page

49207 / 49218

Related Subject Headings

  • 46 Information and computing sciences
  • 40 Engineering
  • 10 Technology
  • 09 Engineering
  • 08 Information and Computing Sciences
 

Citation

APA
Hao, W., Mehta, N., Liang, K. J., Cheng, P., El-Khamy, M., & Carin, L. (2022). WAFFLe: Weight Anonymized Factorization for Federated Learning. IEEE Access, 10, 49207–49218. https://doi.org/10.1109/ACCESS.2022.3172945

Chicago
Hao, W., N. Mehta, K. J. Liang, P. Cheng, M. El-Khamy, and L. Carin. “WAFFLe: Weight Anonymized Factorization for Federated Learning.” IEEE Access 10 (January 1, 2022): 49207–18. https://doi.org/10.1109/ACCESS.2022.3172945.

ICMJE
Hao W, Mehta N, Liang KJ, Cheng P, El-Khamy M, Carin L. WAFFLe: Weight Anonymized Factorization for Federated Learning. IEEE Access. 2022 Jan 1;10:49207–18.

MLA
Hao, W., et al. “WAFFLe: Weight Anonymized Factorization for Federated Learning.” IEEE Access, vol. 10, Jan. 2022, pp. 49207–18. Scopus, doi:10.1109/ACCESS.2022.3172945.

NLM
Hao W, Mehta N, Liang KJ, Cheng P, El-Khamy M, Carin L. WAFFLe: Weight Anonymized Factorization for Federated Learning. IEEE Access. 2022 Jan 1;10:49207–49218.
