Learning Sparse Matrix Row Permutations for Efficient SpMM on GPU Architectures

Conference Paper

Achieving peak performance on sparse operations is challenging: the distribution of the non-zero elements and the underlying hardware platform both affect execution efficiency. Given the diversity of workloads and architectures, no single solution always wins. In this paper, we improve SpMM efficiency on GPUs. We propose several simple but effective sparse data permutations on the CSR data structure. Picking the right permutation over 1,688 datasets improves performance by 1.4× on average compared to plain CSR, and by 2.6× against NVIDIA cuSPARSE. Furthermore, we propose a set of novel features to describe sparsity patterns and their interactions with the kernel and hardware. Using these features, we develop a predictor that selects the best permutation for each matrix. On average, the predicted permutations achieve 96% of the oracle gains.
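To make the core idea concrete, here is a minimal sketch of applying a row permutation to a matrix stored in CSR form. The paper evaluates several candidate permutations on real GPU kernels; this sketch only illustrates the generic reordering step on plain CSR arrays, with one simple heuristic (sorting rows by descending non-zero count) as an example. The function name and the heuristic shown are illustrative assumptions, not taken from the paper.

```python
def permute_csr_rows(indptr, indices, data, perm):
    """Return CSR arrays for the matrix whose new row i is row perm[i]
    of the input matrix (a pure-Python illustration, not the paper's code)."""
    new_indptr = [0]
    new_indices = []
    new_data = []
    for src in perm:
        start, end = indptr[src], indptr[src + 1]
        new_indices.extend(indices[start:end])
        new_data.extend(data[start:end])
        new_indptr.append(len(new_indices))
    return new_indptr, new_indices, new_data


# Example 3x3 matrix in CSR; rows contain 1, 3, and 1 non-zeros.
indptr = [0, 1, 4, 5]
indices = [2, 0, 1, 2, 0]
data = [5.0, 1.0, 2.0, 3.0, 4.0]

# One simple candidate permutation: order rows by descending nnz,
# a basic load-balancing heuristic.
nnz_per_row = [indptr[r + 1] - indptr[r] for r in range(3)]
perm = sorted(range(3), key=lambda r: -nnz_per_row[r])  # -> [1, 0, 2]
p_indptr, p_indices, p_data = permute_csr_rows(indptr, indices, data, perm)
```

Because CSR stores each row's non-zeros contiguously, a row permutation is a cheap copy of row segments and leaves the SpMM kernel itself unchanged; only the row-to-thread mapping differs.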

Cited Authors

  • Mehrabi, A; Lee, D; Chatterjee, N; Sorin, DJ; Lee, BC; O'Connor, M

Published Date

  • March 1, 2021

Published In

  • Proceedings of the 2021 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS 2021)

Start / End Page

  • 48 - 58

International Standard Book Number 13 (ISBN-13)

  • 9781728186436

Digital Object Identifier (DOI)

  • 10.1109/ISPASS51385.2021.00016

Citation Source

  • Scopus