Ansor: Generating high-performance tensor programs for deep learning

Publication, Conference
Zheng, L; Jia, C; Sun, M; Wu, Z; Yu, CH; Haj-Ali, A; Wang, Y; Yang, J; Zhuo, D; Sen, K; Gonzalez, JE; Stoica, I
Published in: Proceedings of the 14th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2020
January 1, 2020

High-performance tensor programs are crucial to guarantee efficient execution of deep neural networks. However, obtaining performant tensor programs for different operators on various hardware platforms is notoriously challenging. Currently, deep learning systems rely on vendor-provided kernel libraries or various search strategies to get performant tensor programs. These approaches either require significant engineering effort to develop platform-specific optimization code or fall short of finding high-performance programs due to restricted search space and ineffective exploration strategy. We present Ansor, a tensor program generation framework for deep learning applications. Compared with existing search strategies, Ansor explores many more optimization combinations by sampling programs from a hierarchical representation of the search space. Ansor then fine-tunes the sampled programs with evolutionary search and a learned cost model to identify the best programs. Ansor can find high-performance programs that are outside the search space of existing state-of-the-art approaches. In addition, Ansor utilizes a task scheduler to simultaneously optimize multiple subgraphs in deep neural networks. We show that Ansor improves the execution performance of deep neural networks relative to the state-of-the-art on the Intel CPU, ARM CPU, and NVIDIA GPU by up to 3.8×, 2.6×, and 1.7×, respectively.
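The abstract's core loop (sample candidate programs, then refine them with evolutionary search guided by a learned cost model) can be illustrated with a minimal, self-contained sketch. This is not Ansor's implementation: the `mutate` and `cost_model` functions below are hypothetical stand-ins (a toy tile-size perturbation and an analytic proxy) for Ansor's program rewriting rules and its trained cost model, which the paper describes operating on real tensor programs.

```python
import random

def mutate(program):
    # Hypothetical mutation: perturb one "tile size" in a toy program
    # representation (a list of ints), standing in for Ansor's rewrites.
    p = list(program)
    i = random.randrange(len(p))
    p[i] = max(1, p[i] * random.choice([1, 2]) // random.choice([1, 2]))
    return p

def cost_model(program):
    # Stand-in for the learned cost model: a toy analytic proxy that
    # prefers tile sizes near 8. Ansor instead predicts cost with a
    # model trained on measured program throughput.
    return sum(abs(x - 8) for x in program)

def evolutionary_search(population, generations=10, keep=4):
    """Keep the lowest-predicted-cost programs, mutate them, repeat."""
    for _ in range(generations):
        population.sort(key=cost_model)
        survivors = population[:keep]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(len(population) - keep)
        ]
    return min(population, key=cost_model)

random.seed(0)
# Sampled initial candidates, mimicking draws from a search space.
initial = [[random.choice([1, 2, 4, 16, 32]) for _ in range(4)]
           for _ in range(16)]
best = evolutionary_search(list(initial))
print(best, cost_model(best))
```

Because the best survivors are always carried over, the minimum predicted cost in the population never increases across generations; in the real system, the top candidates would then be compiled and measured on hardware to select the final program.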


Published In

Proceedings of the 14th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2020

Publication Date

January 1, 2020

Start / End Page

863 / 879

Citation

Zheng, L., Jia, C., Sun, M., Wu, Z., Yu, C. H., Haj-Ali, A., … Stoica, I. (2020). Ansor: Generating high-performance tensor programs for deep learning. In Proceedings of the 14th USENIX Symposium on Operating Systems Design and Implementation, OSDI 2020 (pp. 863–879).
