
HyPar: Towards hybrid parallelism for deep learning accelerator array

Publication, Conference
Song, L; Mao, J; Zhuo, Y; Qian, X; Li, H; Chen, Y
Published in: Proceedings - 25th IEEE International Symposium on High Performance Computer Architecture, HPCA 2019
March 26, 2019

With the rise of artificial intelligence in recent years, Deep Neural Networks (DNNs) have been widely used in many domains. To achieve high performance and energy efficiency, hardware acceleration of DNNs (especially of inference) has been intensively studied in both academia and industry. However, we still face two challenges: large DNN models and datasets, which incur frequent off-chip memory accesses, and the training of DNNs, which is not well explored in recent accelerator designs. To provide high-throughput and energy-efficient acceleration for training deep and large models, we inevitably need multiple accelerators to exploit coarse-grain parallelism, beyond the fine-grain parallelism inside a layer considered in most existing architectures. This poses the key research question of finding the best organization of computation and dataflow among accelerators. In this paper, we propose HYPAR, a solution that determines layer-wise parallelism for deep neural network training with an array of DNN accelerators. HYPAR partitions the feature map tensors (input and output), the kernel tensors, the gradient tensors, and the error tensors across the DNN accelerators; a partition constitutes the choice of parallelism for each weighted layer. The optimization target is to find the partition that minimizes the total communication during the training of a complete DNN. To solve this problem, we propose a communication model that explains the source and amount of communication, and then use a hierarchical layer-wise dynamic programming method to search for the partition of each layer. HYPAR is practical: the time complexity of its partition search is linear. We apply this method in an HMC-based DNN training architecture to minimize data movement. We evaluate HYPAR with ten DNN models, ranging from the classic LeNet to large VGG models, whose numbers of weighted layers range from four to nineteen. Our evaluation finds that the default Model Parallelism is indeed the worst, the default Data Parallelism is not the best, and hybrid parallelism can be better than either in DNN training with an array of accelerators. HYPAR achieves a performance gain of 3.39× and an energy-efficiency gain of 1.51× over Data Parallelism on average, and performs up to 2.40× better than "one weird trick".
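
As a rough illustration of the layer-wise dynamic program described above, the sketch below (not the authors' code) assigns each weighted layer either data parallelism ("D") or model parallelism ("M") while minimizing an assumed total communication cost. The cost functions intra_cost and inter_cost are hypothetical placeholders standing in for HYPAR's actual communication model; only the structure of the search, which is linear in network depth, reflects the approach described in the abstract.

```python
# A minimal sketch, NOT the authors' implementation: a layer-wise dynamic
# program that assigns each weighted layer data parallelism ("D") or model
# parallelism ("M") so that an assumed communication cost is minimized.
# "intra_cost" and "inter_cost" are hypothetical placeholders for HyPar's
# communication model; the search itself is linear in network depth.

def search_hybrid_partition(layers, intra_cost, inter_cost):
    """Return (minimum total communication, per-layer parallelism choices)."""
    choices = ("D", "M")
    # best[c] = (cost of the cheapest plan whose last layer uses choice c,
    #            the list of per-layer choices realizing that plan)
    best = {c: (intra_cost(layers[0], c), [c]) for c in choices}

    for prev_layer, layer in zip(layers, layers[1:]):
        new_best = {}
        for c in choices:
            candidates = []
            for p in choices:
                prev_cost, prev_plan = best[p]
                total = (prev_cost
                         + inter_cost(prev_layer, p, layer, c)  # re-partition between layers
                         + intra_cost(layer, c))                # traffic inside this layer
                candidates.append((total, prev_plan + [c]))
            new_best[c] = min(candidates, key=lambda t: t[0])
        best = new_best

    return min(best.values(), key=lambda t: t[0])


if __name__ == "__main__":
    # Toy (made-up) costs: data parallelism pays for exchanging weight
    # gradients, model parallelism pays for exchanging activations, and
    # switching parallelism between adjacent layers pays a fixed cost.
    toy_layers = [
        {"weights": 1e4, "acts": 1e6},
        {"weights": 1e8, "acts": 1e5},
        {"weights": 1e5, "acts": 1e6},
    ]
    intra = lambda layer, c: layer["weights"] if c == "D" else layer["acts"]
    inter = lambda prev, p, layer, c: 0.0 if p == c else 5e4
    print(search_hybrid_partition(toy_layers, intra, inter))
    # With these toy numbers the search picks the hybrid plan ["D", "M", "D"].
```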


Published In

Proceedings - 25th IEEE International Symposium on High Performance Computer Architecture, HPCA 2019

DOI

10.1109/HPCA.2019.00027

Publication Date

March 26, 2019

Start / End Page

56 / 68
 

Citation

APA
Song, L., Mao, J., Zhuo, Y., Qian, X., Li, H., & Chen, Y. (2019). HyPar: Towards hybrid parallelism for deep learning accelerator array. In Proceedings - 25th IEEE International Symposium on High Performance Computer Architecture, HPCA 2019 (pp. 56–68). https://doi.org/10.1109/HPCA.2019.00027

Chicago
Song, L., J. Mao, Y. Zhuo, X. Qian, H. Li, and Y. Chen. “HyPar: Towards hybrid parallelism for deep learning accelerator array.” In Proceedings - 25th IEEE International Symposium on High Performance Computer Architecture, HPCA 2019, 56–68, 2019. https://doi.org/10.1109/HPCA.2019.00027.

ICMJE
Song L, Mao J, Zhuo Y, Qian X, Li H, Chen Y. HyPar: Towards hybrid parallelism for deep learning accelerator array. In: Proceedings - 25th IEEE International Symposium on High Performance Computer Architecture, HPCA 2019. 2019. p. 56–68.

MLA
Song, L., et al. “HyPar: Towards hybrid parallelism for deep learning accelerator array.” Proceedings - 25th IEEE International Symposium on High Performance Computer Architecture, HPCA 2019, 2019, pp. 56–68. Scopus, doi:10.1109/HPCA.2019.00027.

NLM
Song L, Mao J, Zhuo Y, Qian X, Li H, Chen Y. HyPar: Towards hybrid parallelism for deep learning accelerator array. Proceedings - 25th IEEE International Symposium on High Performance Computer Architecture, HPCA 2019. 2019. p. 56–68.
