AsySQN: Faster Vertical Federated Learning Algorithms with Better Computation Resource Utilization

Publication, Conference
Zhang, Q; Gu, B; Deng, C; Gu, S; Bo, L; Pei, J; Huang, H
Published in: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
August 14, 2021

Vertical federated learning (VFL) is an effective paradigm for privacy-preserving collaborative learning across organizations (e.g., different corporations, companies, and institutions). Stochastic gradient descent (SGD) methods are popular choices for training VFL models because of their low per-iteration computation. However, existing SGD-based VFL algorithms are communication-expensive due to the large number of communication rounds they require. Meanwhile, most existing VFL algorithms use synchronous computation, which seriously hampers computation resource utilization in real-world applications. To address the challenges of communication and computation resource utilization, we propose an asynchronous stochastic quasi-Newton (AsySQN) framework for VFL, under which three algorithms, i.e., AsySQN-SGD, -SVRG, and -SAGA, are proposed. The proposed AsySQN-type algorithms take descent steps scaled by approximate Hessian information (without calculating the inverse Hessian matrix explicitly), so they converge much faster than SGD-based methods in practice and can dramatically reduce the number of communication rounds. Moreover, the adopted asynchronous computation makes better use of computation resources. We theoretically prove the convergence rates of our proposed algorithms for strongly convex problems. Extensive numerical experiments on real-world datasets demonstrate the lower communication costs and better computation resource utilization of our algorithms compared with state-of-the-art VFL algorithms.
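
The abstract summarizes the core idea (descent steps scaled by approximate Hessian information, obtained without explicitly forming the inverse Hessian) but gives no pseudocode. Below is a minimal, single-machine NumPy sketch of a generic stochastic quasi-Newton SGD step that scales mini-batch gradients with an L-BFGS-style two-loop recursion on a strongly convex ridge-regression objective. This is only an illustration under those assumptions, not the authors' AsySQN implementation: the vertical feature partitioning, privacy mechanisms, asynchronous workers, and the SVRG/SAGA variants are all omitted, and every function name and parameter here is hypothetical.

    import numpy as np


    def two_loop_recursion(grad, s_list, y_list):
        """Return (approximate inverse Hessian) @ grad via the L-BFGS two-loop
        recursion, so the inverse Hessian matrix is never formed explicitly."""
        q = grad.copy()
        rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
        alphas = []
        for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
            a = rho * (s @ q)
            alphas.append(a)
            q -= a * y
        if s_list:  # initial Hessian scaling gamma = s'y / y'y (most recent pair)
            s, y = s_list[-1], y_list[-1]
            q *= (s @ y) / (y @ y)
        for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
            b = rho * (y @ q)
            q += (a - b) * s
        return q


    def batch_grad(w, Xb, tb, lam):
        """Mini-batch gradient of 0.5*||X w - t||^2 / n + 0.5*lam*||w||^2."""
        return Xb.T @ (Xb @ w - tb) / len(tb) + lam * w


    def sqn_sgd(X, t, lam=0.01, lr=0.5, memory=5, steps=300, batch=32, seed=0):
        """Stochastic quasi-Newton SGD on an l2-regularized least-squares
        objective (strongly convex), as a stand-in illustration only."""
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        s_list, y_list = [], []
        w_prev = None
        for _ in range(steps):
            idx = rng.choice(n, size=batch, replace=False)
            Xb, tb = X[idx], t[idx]
            g = batch_grad(w, Xb, tb, lam)
            if w_prev is not None:
                # Curvature pair from a same-batch gradient difference, which
                # keeps y consistent with s despite stochastic sampling.
                s, y = w - w_prev, g - batch_grad(w_prev, Xb, tb, lam)
                if y @ s > 1e-10:  # keep only pairs satisfying the curvature condition
                    s_list.append(s)
                    y_list.append(y)
                    if len(s_list) > memory:
                        s_list.pop(0)
                        y_list.pop(0)
            direction = two_loop_recursion(g, s_list, y_list)  # Hessian-scaled step
            w_prev = w.copy()
            w = w - lr * direction
        return w


    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.standard_normal((500, 20))
        w_true = rng.standard_normal(20)
        t = X @ w_true + 0.01 * rng.standard_normal(500)
        w_hat = sqn_sgd(X, t)
        print("relative error:", np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true))

With an empty curvature memory the update reduces to plain mini-batch SGD; as (s, y) pairs accumulate, the scaled direction incorporates approximate second-order information, which is the mechanism the abstract credits for the reduced number of communication rounds in the federated setting.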


Published In

Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining

DOI

10.1145/3447548.3467169

ISBN

9781450383325

Publication Date

August 14, 2021

Start / End Page

3917 / 3927

Citation

APA: Zhang, Q., Gu, B., Deng, C., Gu, S., Bo, L., Pei, J., & Huang, H. (2021). AsySQN: Faster Vertical Federated Learning Algorithms with Better Computation Resource Utilization. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 3917–3927). https://doi.org/10.1145/3447548.3467169

Chicago: Zhang, Q., B. Gu, C. Deng, S. Gu, L. Bo, J. Pei, and H. Huang. “AsySQN: Faster Vertical Federated Learning Algorithms with Better Computation Resource Utilization.” In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 3917–27, 2021. https://doi.org/10.1145/3447548.3467169.

ICMJE: Zhang Q, Gu B, Deng C, Gu S, Bo L, Pei J, et al. AsySQN: Faster Vertical Federated Learning Algorithms with Better Computation Resource Utilization. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2021. p. 3917–27.

MLA: Zhang, Q., et al. “AsySQN: Faster Vertical Federated Learning Algorithms with Better Computation Resource Utilization.” Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2021, pp. 3917–27. Scopus, doi:10.1145/3447548.3467169.

NLM: Zhang Q, Gu B, Deng C, Gu S, Bo L, Pei J, Huang H. AsySQN: Faster Vertical Federated Learning Algorithms with Better Computation Resource Utilization. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2021. p. 3917–3927.
