
Exploring Bit-Slice Sparsity in Deep Neural Networks for Efficient ReRAM-Based Deployment

Publication, Conference
Zhang, J; Yang, H; Chen, F; Wang, Y; Li, H
Published in: Proceedings - 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, EMC2-NIPS 2019
December 1, 2019

Emerging resistive random-access memory (ReRAM) has recently been intensively investigated to accelerate the processing of deep neural networks (DNNs). Thanks to their in-situ computation capability, analog ReRAM crossbars deliver significant throughput improvement and energy reduction compared to traditional digital methods. However, power-hungry analog-to-digital converters (ADCs) prevent the practical deployment of ReRAM-based DNN accelerators on end devices with limited chip area and power budgets. We observe that, due to the limited bit density of ReRAM cells, DNN weights are bit-sliced and stored across multiple ReRAM bitlines. The current accumulated on each bitline by these weight slices directly dictates the ADC overhead. Consequently, bitwise weight sparsity, rather than sparsity of the full weight, is desirable for efficient ReRAM deployment. In this work, we propose bit-slice ℓ1, the first algorithm to induce bit-slice sparsity during the training of dynamic fixed-point DNNs. Experimental results show that our approach achieves a 2× sparsity improvement over previous algorithms. The resulting sparsity allows the ADC resolution to be reduced to 1 bit for the most significant bit-slice and 3 bits for the remaining slices, which significantly speeds up processing and reduces power and area overhead.
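To make the bit-slicing idea concrete, the following is a minimal sketch (not the paper's implementation) of how a quantized weight can be split into 2-bit slices, each of which would occupy its own group of ReRAM bitlines, and how a per-slice ℓ1-style penalty could be tallied over those slices. It assumes unsigned 8-bit fixed-point weights for simplicity; the paper's dynamic fixed-point format and training procedure are not reproduced here, and all function names are illustrative.

```python
# Illustrative sketch only: bit-slicing of an unsigned 8-bit fixed-point
# weight into 2-bit slices, and a per-slice L1-style tally. The actual
# bit-slice l1 algorithm in the paper operates during DNN training on
# dynamic fixed-point weights; this merely shows the slicing concept.

def bit_slices(w, total_bits=8, slice_bits=2):
    """Split a non-negative integer weight into 2-bit slices,
    least-significant slice first. Each slice would be stored on
    its own ReRAM bitline group, so its magnitude contributes to
    that bitline's accumulated current."""
    mask = (1 << slice_bits) - 1
    return [(w >> s) & mask for s in range(0, total_bits, slice_bits)]

def bit_slice_l1(weights, total_bits=8, slice_bits=2):
    """Per-slice L1 tally: sum of each slice's magnitude over all
    weights. Driving a slice's tally toward zero sparsifies the
    corresponding bitlines, which is what permits a lower ADC
    resolution there."""
    n_slices = total_bits // slice_bits
    penalty = [0] * n_slices
    for w in weights:
        for i, s in enumerate(bit_slices(w, total_bits, slice_bits)):
            penalty[i] += s
    return penalty

# Example: 147 = 0b10_01_00_11 splits into slices [3, 0, 1, 2]
# (least-significant slice first).
print(bit_slices(147))        # [3, 0, 1, 2]
print(bit_slice_l1([147, 3])) # [6, 0, 1, 2]
```

Note how zeroing an entire slice position across many weights (e.g. the second slice above) leaves its bitlines carrying no current, which is the bitwise sparsity the abstract argues for, as opposed to requiring the whole weight to be zero.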


Published In

Proceedings - 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, EMC2-NIPS 2019

DOI

10.1109/EMC2-NIPS53020.2019.00008
ISBN

9781665424189

Publication Date

December 1, 2019

Start / End Page

1 / 5

Citation

APA: Zhang, J., Yang, H., Chen, F., Wang, Y., & Li, H. (2019). Exploring Bit-Slice Sparsity in Deep Neural Networks for Efficient ReRAM-Based Deployment. In Proceedings - 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, EMC2-NIPS 2019 (pp. 1–5). https://doi.org/10.1109/EMC2-NIPS53020.2019.00008
Chicago: Zhang, J., H. Yang, F. Chen, Y. Wang, and H. Li. “Exploring Bit-Slice Sparsity in Deep Neural Networks for Efficient ReRAM-Based Deployment.” In Proceedings - 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, EMC2-NIPS 2019, 1–5, 2019. https://doi.org/10.1109/EMC2-NIPS53020.2019.00008.
ICMJE: Zhang J, Yang H, Chen F, Wang Y, Li H. Exploring Bit-Slice Sparsity in Deep Neural Networks for Efficient ReRAM-Based Deployment. In: Proceedings - 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, EMC2-NIPS 2019. 2019. p. 1–5.
MLA: Zhang, J., et al. “Exploring Bit-Slice Sparsity in Deep Neural Networks for Efficient ReRAM-Based Deployment.” Proceedings - 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, EMC2-NIPS 2019, 2019, pp. 1–5. Scopus, doi:10.1109/EMC2-NIPS53020.2019.00008.
NLM: Zhang J, Yang H, Chen F, Wang Y, Li H. Exploring Bit-Slice Sparsity in Deep Neural Networks for Efficient ReRAM-Based Deployment. Proceedings - 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, EMC2-NIPS 2019. 2019. p. 1–5.
