
TernGrad: Ternary gradients to reduce communication in distributed deep learning

Publication, Conference
Wen, W; Xu, C; Yan, F; Wu, C; Wang, Y; Chen, Y; Li, H
Published in: Advances in Neural Information Processing Systems
January 1, 2017

High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. In this work, we propose TernGrad, which uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {-1, 0, 1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet does not incur any accuracy loss and can even improve accuracy. The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available[1].
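The ternarization described in the abstract maps each gradient element to one of three levels scaled by a per-layer factor, so a worker only needs to transmit a sign pattern and one float per layer. The following is a minimal NumPy sketch of layer-wise stochastic ternarization with gradient clipping, reconstructed from the abstract's description; the function name, the clipping threshold, and the rounding details are illustrative assumptions, not the authors' released code.

    import numpy as np

    def ternarize(grad, clip_sigmas=2.5, rng=None):
        """Layer-wise stochastic ternarization of a gradient tensor.

        Each element is mapped to {-s, 0, +s}, where s is the layer's
        maximum absolute (clipped) gradient, so the result is unbiased
        in expectation. Sketch only; details are assumptions.
        """
        rng = rng or np.random.default_rng()

        # Gradient clipping: cap magnitudes at a few standard deviations
        # so the scaling factor s is not dominated by outliers.
        limit = clip_sigmas * grad.std()
        g = np.clip(grad, -limit, limit)

        s = np.abs(g).max()  # layer-wise scaler
        if s == 0.0:
            return np.zeros_like(g)

        # Keep each element's sign with probability |g_i| / s, else zero it,
        # so E[ternarized gradient] equals the clipped gradient.
        keep = rng.random(g.shape) < (np.abs(g) / s)
        return s * np.sign(g) * keep

    # Example: a worker ternarizes its local gradient before sending it
    # to the parameter server for aggregation.
    g = np.random.randn(4, 4).astype(np.float32) * 0.01
    print(ternarize(g))

Because the ternarized tensor equals the clipped gradient in expectation, stochastic gradient descent can still converge under the gradient bound the paper assumes, which is what the layer-wise ternarizing and clipping heuristics are designed to preserve.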


Published In

Advances in Neural Information Processing Systems

ISSN

1049-5258

Publication Date

January 1, 2017

Volume

2017-December

Start / End Page

1510 / 1520

Related Subject Headings

  • 4611 Machine learning
  • 1702 Cognitive Sciences
  • 1701 Psychology
 

Citation

APA: Wen, W., Xu, C., Yan, F., Wu, C., Wang, Y., Chen, Y., & Li, H. (2017). TernGrad: Ternary gradients to reduce communication in distributed deep learning. In Advances in Neural Information Processing Systems (Vol. 2017-December, pp. 1510–1520).

Chicago: Wen, W., C. Xu, F. Yan, C. Wu, Y. Wang, Y. Chen, and H. Li. “TernGrad: Ternary gradients to reduce communication in distributed deep learning.” In Advances in Neural Information Processing Systems, 2017-December:1510–20, 2017.

ICMJE: Wen W, Xu C, Yan F, Wu C, Wang Y, Chen Y, et al. TernGrad: Ternary gradients to reduce communication in distributed deep learning. In: Advances in Neural Information Processing Systems. 2017. p. 1510–20.

MLA: Wen, W., et al. “TernGrad: Ternary gradients to reduce communication in distributed deep learning.” Advances in Neural Information Processing Systems, vol. 2017-December, 2017, pp. 1510–20.

NLM: Wen W, Xu C, Yan F, Wu C, Wang Y, Chen Y, Li H. TernGrad: Ternary gradients to reduce communication in distributed deep learning. Advances in Neural Information Processing Systems. 2017. p. 1510–1520.
