Prosperity: Accelerating Spiking Neural Networks via Product Sparsity

Publication, Conference
Wei, C; Guo, C; Cheng, F; Li, S; Yang, HF; Li, HH; Chen, Y
Published in: Proceedings International Symposium on High Performance Computer Architecture
January 1, 2025

Spiking Neural Networks (SNNs) are highly efficient due to their spike-based activation, which inherently produces bit-sparse computation patterns. Existing hardware implementations of SNNs exploit this sparsity to avoid wasteful zero-value computations, yet this approach fails to fully capitalize on the potential efficiency of SNNs. This study introduces a novel sparsity paradigm called Product Sparsity, which leverages combinatorial similarities within matrix multiplication operations to reuse inner product results and reduce redundant computations. Compared to traditional bit sparsity methods, Product Sparsity significantly enhances sparsity in SNNs without compromising the original computation results. For instance, in the SpikeBERT SNN model, Product Sparsity achieves a density of only 1.23% and reduces computation by 11×, whereas bit sparsity yields a density of 13.19%. To efficiently implement Product Sparsity, we propose Prosperity, an architecture that addresses the challenges of identifying and eliminating redundant computations in real time. Compared to the prior SNN accelerator PTB and the A100 GPU, Prosperity achieves average speedups of 7.4× and 1.8×, respectively, along with energy efficiency improvements of 8.0× and 193×, respectively. The code for Prosperity is available at https://github.com/dubcyfor3/Prosperity.
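To make the idea concrete, below is a minimal Python sketch of how product sparsity can reuse inner product results: when one spike row's set of active indices contains another row's, the already-computed partial sum is copied and only the leftover spikes are accumulated. The function name, the brute-force subset search, and the NumPy formulation are illustrative assumptions based on this abstract, not the actual Prosperity dataflow.

```python
import numpy as np

def matmul_product_sparsity(S, W):
    """Hypothetical sketch: multiply a binary spike matrix S (M x K) by a
    dense weight matrix W (K x N), reusing inner product results between
    rows whose active-index sets overlap."""
    M, _ = S.shape
    out = np.zeros((M, W.shape[1]), dtype=W.dtype)
    done = []  # (active-index set, row index) of already-computed rows
    for i in range(M):
        active = frozenset(np.flatnonzero(S[i]))
        # Brute-force search for the largest previously computed row whose
        # spike set is contained in this one (real hardware would identify
        # such reuse differently; this loop is only for clarity).
        best = None
        for prev_set, prev_row in done:
            if prev_set <= active and (best is None or len(prev_set) > len(best[0])):
                best = (prev_set, prev_row)
        if best is not None:
            out[i] = out[best[1]]         # reuse the shared inner product result
            remaining = active - best[0]  # only the leftover spikes cost work
        else:
            remaining = active            # plain bit sparsity: skip zero spikes
        for k in remaining:
            out[i] += W[k]
        done.append((active, i))
    return out
```

For a binary S, this returns the same result as S @ W; the gain is that rows with shared spike patterns pay only for their set difference, which is roughly what the density figures above capture.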

Published In

Proceedings International Symposium on High Performance Computer Architecture

DOI

10.1109/HPCA61900.2025.00066

ISSN

1530-0897

Publication Date

January 1, 2025

Start / End Page

806 / 820

Citation

APA: Wei, C., Guo, C., Cheng, F., Li, S., Yang, H. F., Li, H. H., & Chen, Y. (2025). Prosperity: Accelerating Spiking Neural Networks via Product Sparsity. In Proceedings International Symposium on High Performance Computer Architecture (pp. 806–820). https://doi.org/10.1109/HPCA61900.2025.00066

Chicago: Wei, C., C. Guo, F. Cheng, S. Li, H. F. Yang, H. H. Li, and Y. Chen. “Prosperity: Accelerating Spiking Neural Networks via Product Sparsity.” In Proceedings International Symposium on High Performance Computer Architecture, 806–20, 2025. https://doi.org/10.1109/HPCA61900.2025.00066.

ICMJE: Wei C, Guo C, Cheng F, Li S, Yang HF, Li HH, et al. Prosperity: Accelerating Spiking Neural Networks via Product Sparsity. In: Proceedings International Symposium on High Performance Computer Architecture. 2025. p. 806–20.

MLA: Wei, C., et al. “Prosperity: Accelerating Spiking Neural Networks via Product Sparsity.” Proceedings International Symposium on High Performance Computer Architecture, 2025, pp. 806–20. Scopus, doi:10.1109/HPCA61900.2025.00066.

NLM: Wei C, Guo C, Cheng F, Li S, Yang HF, Li HH, Chen Y. Prosperity: Accelerating Spiking Neural Networks via Product Sparsity. Proceedings International Symposium on High Performance Computer Architecture. 2025. p. 806–820.
