ReBoc: Accelerating Block-Circulant Neural Networks in ReRAM

Conference Paper

© 2020 EDAA. Deep neural networks (DNNs) have emerged as a key component in various applications. However, ever-growing DNN model sizes hinder efficient processing on hardware. To tackle this problem, on the algorithmic side, compressed DNN models have been explored, among which block-circulant DNN models are memory-efficient and hardware-friendly; on the hardware side, resistive random-access memory (ReRAM) based accelerators are promising for in-situ processing of DNNs. In this work, we design an accelerator named ReBoc for accelerating block-circulant DNNs in ReRAM, reaping the benefits of lightweight models and efficient in-situ processing simultaneously. We propose a novel mapping scheme that uses Horizontal Weight Slicing and Intra-Crossbar Weight Duplication to map block-circulant DNN models onto ReRAM crossbars with significantly improved crossbar utilization. Moreover, two techniques, namely Input Slice Reusing and Input Tile Sharing, exploit the circulant computation pattern of block-circulant DNNs to reduce data accesses and buffer size. In ReBoc, a DNN model is executed within an intra-layer processing pipeline, achieving 96× and 8.86× power-efficiency improvements over state-of-the-art FPGA and ASIC accelerators for block-circulant neural networks, respectively. Compared to ReRAM-based DNN accelerators, ReBoc achieves on average a 4.1× speedup and 2.6× energy reduction.
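For readers unfamiliar with the compression scheme the abstract refers to: in a block-circulant DNN, each weight matrix is partitioned into square blocks, and every block is a circulant matrix fully defined by a single length-k vector, so storage drops from k² to k values per block. The sketch below (our own illustrative NumPy code, not the paper's ReRAM mapping, which performs the computation in analog crossbars) shows a block-circulant matrix-vector product computed via FFTs and verifies it against the equivalent dense multiplication; all function names here are ours.

```python
import numpy as np

def circulant(c):
    # Dense circulant matrix whose first column is c: C[i, j] = c[(i - j) % k].
    k = len(c)
    return np.array([[c[(i - j) % k] for j in range(k)] for i in range(k)])

def block_circulant_matvec(blocks, x, k):
    # blocks: p x q grid of defining vectors (length k), one per circulant block.
    # x: input vector of length q*k. Returns y = W @ x of length p*k, where W is
    # the block-circulant matrix defined by `blocks`.
    # Each circulant block multiply is a circular convolution, done via FFT.
    p, q = len(blocks), len(blocks[0])
    X = [np.fft.fft(x[j * k:(j + 1) * k]) for j in range(q)]  # FFT each input slice once
    y = np.zeros(p * k)
    for i in range(p):
        acc = np.zeros(k, dtype=complex)
        for j in range(q):
            acc += np.fft.fft(blocks[i][j]) * X[j]  # elementwise product in frequency domain
        y[i * k:(i + 1) * k] = np.fft.ifft(acc).real
    return y

# Check against the equivalent dense weight matrix.
rng = np.random.default_rng(0)
k, p, q = 4, 2, 3
blocks = [[rng.standard_normal(k) for _ in range(q)] for _ in range(p)]
x = rng.standard_normal(q * k)
dense = np.block([[circulant(b) for b in row] for row in blocks])
assert np.allclose(block_circulant_matvec(blocks, x, k), dense @ x)
```

Note how each input slice's FFT is computed once and reused across all block rows; the Input Slice Reusing and Input Tile Sharing techniques in the paper exploit the analogous reuse opportunity in the crossbar-based computation.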

Cited Authors

  • Wang, Y; Chen, F; Song, L; Richard Shi, CJ; Li, HH; Chen, Y

Published Date

  • March 1, 2020

Published In

  • Proceedings of the 2020 Design, Automation and Test in Europe Conference and Exhibition (DATE 2020)

Start / End Page

  • 1472 - 1477

International Standard Book Number 13 (ISBN-13)

  • 9783981926347

Digital Object Identifier (DOI)

  • 10.23919/DATE48585.2020.9116422

Citation Source

  • Scopus