Learning Sparse Matrix Row Permutations for Efficient SpMM on GPU Architectures
Achieving peak performance on sparse operations is challenging. The distribution of non-zero elements and the underlying hardware platform both affect execution efficiency. Given the diversity in workloads and architectures, no single solution always wins. In this paper, we improve SpMM efficiency on GPUs. We propose several simple but effective sparse data permutations on the CSR data structure. Picking the right permutation over 1,688 datasets improves performance by 1.4×, on average, compared to plain CSR and by 2.6× against NVIDIA cuSPARSE. Furthermore, we propose a set of novel features to describe sparsity patterns and their interactions with the kernel and hardware. Using these features, we develop a predictor that selects the best permutation for each matrix. On average, the predicted permutations achieve 96% of the oracle's gains.
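The abstract describes reordering the rows of a CSR matrix before running SpMM. As an illustrative sketch only (the paper's specific permutations are not given here), the snippet below shows how an arbitrary row permutation can be applied to CSR arrays, with a hypothetical heuristic that sorts rows by descending nonzero count; both function names are assumptions for illustration.

```python
def permute_csr_rows(indptr, indices, data, perm):
    """Return new CSR arrays whose row i is row perm[i] of the input."""
    new_indptr = [0]
    new_indices, new_data = [], []
    for src in perm:
        start, end = indptr[src], indptr[src + 1]
        new_indices.extend(indices[start:end])
        new_data.extend(data[start:end])
        new_indptr.append(len(new_indices))
    return new_indptr, new_indices, new_data


def nnz_sorted_perm(indptr):
    """Hypothetical heuristic: order rows by descending nonzero count,
    grouping similar-length rows for more uniform GPU work per warp."""
    n_rows = len(indptr) - 1
    return sorted(range(n_rows),
                  key=lambda r: indptr[r + 1] - indptr[r],
                  reverse=True)


# Example: 3x3 sparse matrix in CSR with rows of 1, 3, and 2 nonzeros.
indptr = [0, 1, 4, 6]
indices = [2, 0, 1, 2, 1, 2]
data = [5, 1, 2, 3, 7, 8]

perm = nnz_sorted_perm(indptr)          # -> [1, 2, 0]
p_indptr, p_indices, p_data = permute_csr_rows(indptr, indices, data, perm)
```

Applying the permutation is cheap relative to SpMM itself, which is what makes trying several orderings, or predicting one, practical.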
Mehrabi, A; Lee, D; Chatterjee, N; Sorin, DJ; Lee, BC; O'Connor, M
Proceedings of the 2021 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS 2021)