
Phi: Leveraging Pattern-based Hierarchical Sparsity for High-Efficiency Spiking Neural Networks

Publication, Conference
Wei, C; Duan, B; Guo, C; Zhang, J; Song, Q; Li, H; Chen, Y
Published in: Proceedings International Symposium on Computer Architecture
June 21, 2025

Spiking Neural Networks (SNNs) are gaining attention for their energy efficiency and biological plausibility, utilizing 0-1 activation sparsity through spike-driven computation. While existing SNN accelerators exploit this sparsity to skip zero computations, they often overlook the distinctive distribution patterns inherent in binary activations. In this work, we observe that particular patterns exist in spike activations, which can be exploited to reduce the substantial computation of SNN models. Based on these findings, we propose a novel pattern-based hierarchical sparsity framework, termed Phi, to optimize computation. Phi introduces a two-level sparsity hierarchy: Level 1 provides vector-wise sparsity by representing activations with pre-defined patterns, allowing their products with weights to be pre-computed offline and thereby eliminating most runtime computation. Level 2 provides element-wise sparsity through a highly sparse matrix that complements the Level 1 approximation, further reducing computation while maintaining accuracy. We present an algorithm-hardware co-design approach. Algorithmically, we employ a k-means-based pattern selection method to identify representative patterns and introduce a pattern-aware fine-tuning technique to enhance Level 2 sparsity. Architecturally, we design Phi, a dedicated hardware architecture that efficiently processes the two levels of Phi sparsity on the fly. Extensive experiments demonstrate that Phi achieves a 3.45× speedup and a 4.93× improvement in energy efficiency compared to state-of-the-art SNN accelerators, showcasing the effectiveness of our framework in optimizing SNN computation.
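To make the two-level decomposition concrete, below is a minimal sketch in Python of the idea the abstract describes, assuming k-means pattern selection on binary spike rows and a signed element-wise residual in {-1, 0, +1}. All names (select_patterns, decompose, phi_matmul) and implementation details are illustrative assumptions based only on the abstract, not the paper's actual algorithm, hardware mapping, or API.

# Hedged sketch of Phi-style pattern-based hierarchical sparsity.
# Assumption: each binary activation row is split into its nearest
# pre-defined pattern (Level 1) plus a sparse signed residual (Level 2).
import numpy as np
from sklearn.cluster import KMeans

def select_patterns(spikes, k):
    """Level 1: pick k representative binary patterns by clustering spike
    rows with k-means and binarizing the centroids (the abstract mentions
    a k-means-based pattern selection method)."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(spikes)
    return (km.cluster_centers_ > 0.5).astype(np.int8), km

def decompose(spikes, patterns, km):
    """Level 2: express each row as nearest_pattern + residual, where the
    residual has entries in {-1, 0, +1} and is highly sparse when the
    patterns match the activations well."""
    idx = km.predict(spikes.astype(float))
    residual = spikes.astype(np.int8) - patterns[idx]
    return idx, residual

def phi_matmul(spikes, W, k=16):
    """Output = (pre-computed pattern @ W) + (sparse residual @ W).
    The first term can be computed offline once per layer; at runtime it
    is only a table lookup plus a cheap sparse correction."""
    patterns, km = select_patterns(spikes, k)
    pre = patterns @ W                      # offline pre-computation with weights
    idx, residual = decompose(spikes, patterns, km)
    return pre[idx] + residual @ W          # runtime: lookup + sparse correction

# Toy check: the decomposition is exact, so the result matches the dense product.
rng = np.random.default_rng(0)
S = (rng.random((64, 128)) < 0.2).astype(np.int8)   # sparse 0-1 spike matrix
W = rng.standard_normal((128, 32))
assert np.allclose(phi_matmul(S, W), S @ W)

Because the decomposition is exact, accuracy is unaffected in this sketch; the savings come from pre[idx] being a lookup into offline-computed products and from the residual matrix being far sparser than the raw activations.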

Published In

Proceedings International Symposium on Computer Architecture

DOI

10.1145/3695053.3731035

EISSN

2575-713X

ISSN

1063-6897

Publication Date

June 21, 2025

Start / End Page

930 / 943
 

Citation

APA: Wei, C., Duan, B., Guo, C., Zhang, J., Song, Q., Li, H., & Chen, Y. (2025). Phi: Leveraging Pattern-based Hierarchical Sparsity for High-Efficiency Spiking Neural Networks. In Proceedings International Symposium on Computer Architecture (pp. 930–943). https://doi.org/10.1145/3695053.3731035
Chicago: Wei, C., B. Duan, C. Guo, J. Zhang, Q. Song, H. Li, and Y. Chen. “Phi: Leveraging Pattern-based Hierarchical Sparsity for High-Efficiency Spiking Neural Networks.” In Proceedings International Symposium on Computer Architecture, 930–43, 2025. https://doi.org/10.1145/3695053.3731035.
ICMJE: Wei C, Duan B, Guo C, Zhang J, Song Q, Li H, et al. Phi: Leveraging Pattern-based Hierarchical Sparsity for High-Efficiency Spiking Neural Networks. In: Proceedings International Symposium on Computer Architecture. 2025. p. 930–43.
MLA: Wei, C., et al. “Phi: Leveraging Pattern-based Hierarchical Sparsity for High-Efficiency Spiking Neural Networks.” Proceedings International Symposium on Computer Architecture, 2025, pp. 930–43. Scopus, doi:10.1145/3695053.3731035.
NLM: Wei C, Duan B, Guo C, Zhang J, Song Q, Li H, Chen Y. Phi: Leveraging Pattern-based Hierarchical Sparsity for High-Efficiency Spiking Neural Networks. Proceedings International Symposium on Computer Architecture. 2025. p. 930–943.
