
ESCALATE: Boosting the efficiency of sparse CNN accelerator with kernel decomposition

Publication, Conference
Li, S; Hanson, E; Qian, X; Li, HH; Chen, Y
Published in: Proceedings of the Annual International Symposium on Microarchitecture, MICRO
October 18, 2021

The ever-growing parameter size and computation cost of Convolutional Neural Network (CNN) models hinder their deployment onto resource-constrained platforms. Network pruning techniques have been proposed to remove the redundancy in CNN parameters and produce a sparse model. Sparsity-aware accelerators have also been proposed to reduce the computation cost and memory bandwidth requirements of inference by leveraging model sparsity. The irregularity of sparse patterns, however, limits the efficiency of those designs. Researchers have addressed this issue by creating regular sparsity patterns through hardware-aware pruning algorithms, but the pruning rate of such solutions is largely limited by the enforced sparsity patterns. This limitation motivates us to explore compression methods beyond pruning. With its two decoupled computation stages, we find that kernel decomposition can take the processing of the sparse pattern off the critical path of inference and achieve a high compression ratio without enforcing sparsity patterns. To exploit these advantages, we propose ESCALATE, an algorithm-hardware co-design approach based on kernel decomposition. At the algorithm level, ESCALATE reorganizes the two computation stages of the decomposed convolution to enable stream processing of the intermediate feature maps, and applies a hybrid quantization scheme to exploit the different reuse frequencies of the two parts of the decomposed weights. At the architecture level, ESCALATE introduces a novel 'Basis-First' dataflow and its corresponding microarchitecture design to maximize the benefits brought by the decomposed convolution. We evaluate ESCALATE with four representative CNN models on both the CIFAR-10 and ImageNet datasets and compare it against previous sparse accelerators and pruning algorithms. Results show that ESCALATE achieves up to 325× and 11× compression ratios for models on CIFAR-10 and ImageNet, respectively. Compared with previous dense and sparse accelerators, the ESCALATE accelerator boosts energy efficiency by 8.3× and 3.77× on average, and reduces latency by 17.9× and 2.16×, respectively.
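The abstract only sketches the idea; the minimal NumPy example below is an illustrative assumption of how a decomposed convolution splits into two decoupled stages, not the paper's actual formulation (the function names conv2d_single and decomposed_conv and the basis/coeff shapes are hypothetical). Each kernel is approximated as a linear combination of a few shared basis kernels, so the dense basis convolutions run first and the per-kernel coefficients are applied afterward as a 1x1-style combination.

```python
import numpy as np

def conv2d_single(x, k):
    """Valid 2-D correlation of one input channel with one KxK kernel."""
    K = k.shape[0]
    H, W = x.shape
    out = np.zeros((H - K + 1, W - K + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + K, j:j + K] * k)
    return out

def decomposed_conv(x, basis, coeff):
    """Two-stage decomposed convolution (illustrative sketch).

    x:     (Cin, H, W)      input feature map
    basis: (M, K, K)        M shared basis kernels, with M small
    coeff: (Cout, Cin, M)   per-kernel combination coefficients
    """
    Cin = x.shape[0]
    M = basis.shape[0]
    # Stage 1 ("basis-first"): convolve every input channel with the few
    # basis kernels; these intermediate maps are reused by all output channels.
    fmap = np.stack([
        np.stack([conv2d_single(x[c], basis[m]) for m in range(M)])
        for c in range(Cin)
    ])                                               # (Cin, M, H', W')
    # Stage 2: linearly combine the basis feature maps with the coefficients,
    # which is where compression (sparse or low-precision coeff) can live
    # without sitting on the convolution's critical path.
    return np.einsum('oim,imhw->ohw', coeff, fmap)   # (Cout, H', W')
```

By linearity, this is equivalent to a standard convolution whose weights are W[o, i] = sum_m coeff[o, i, m] * basis[m]; when M is small and the coefficients are sparse or coarsely quantized, both weight storage and multiply count shrink, which is the kind of benefit the two decoupled stages are meant to expose.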


Published In

Proceedings of the Annual International Symposium on Microarchitecture, MICRO

DOI

10.1145/3466752.3480043

ISSN

1072-4451

ISBN

9781450385572

Publication Date

October 18, 2021

Start / End Page

992 / 1004

Citation

APA: Li, S., Hanson, E., Qian, X., Li, H. H., & Chen, Y. (2021). ESCALATE: Boosting the efficiency of sparse CNN accelerator with kernel decomposition. In Proceedings of the Annual International Symposium on Microarchitecture, MICRO (pp. 992–1004). https://doi.org/10.1145/3466752.3480043

Chicago: Li, S., E. Hanson, X. Qian, H. H. Li, and Y. Chen. “ESCALATE: Boosting the efficiency of sparse CNN accelerator with kernel decomposition.” In Proceedings of the Annual International Symposium on Microarchitecture, MICRO, 992–1004, 2021. https://doi.org/10.1145/3466752.3480043.

ICMJE: Li S, Hanson E, Qian X, Li HH, Chen Y. ESCALATE: Boosting the efficiency of sparse CNN accelerator with kernel decomposition. In: Proceedings of the Annual International Symposium on Microarchitecture, MICRO. 2021. p. 992–1004.

MLA: Li, S., et al. “ESCALATE: Boosting the efficiency of sparse CNN accelerator with kernel decomposition.” Proceedings of the Annual International Symposium on Microarchitecture, MICRO, 2021, pp. 992–1004. Scopus, doi:10.1145/3466752.3480043.

NLM: Li S, Hanson E, Qian X, Li HH, Chen Y. ESCALATE: Boosting the efficiency of sparse CNN accelerator with kernel decomposition. Proceedings of the Annual International Symposium on Microarchitecture, MICRO. 2021. p. 992–1004.
