
PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning

Publication, Conference
Song, L; Qian, X; Li, H; Chen, Y
Published in: Proceedings - International Symposium on High-Performance Computer Architecture
May 5, 2017

Convolutional neural networks (CNNs) are at the heart of deep learning applications. Recent works PRIME [1] and ISAAC [2] demonstrated the promise of using resistive random access memory (ReRAM) to perform neural computations in memory. We found that training cannot be efficiently supported with the current schemes. First, they do not consider the weight updates and complex data dependencies in the training procedure. Second, ISAAC attempts to increase system throughput with a very deep pipeline, which is beneficial only when a large number of consecutive images can be fed into the architecture. In training, the notion of a batch (e.g., 64 images) limits the number of images that can be processed consecutively, because the images in the next batch need to be processed based on the updated weights. Third, the deep pipeline in ISAAC is vulnerable to pipeline bubbles and execution stalls. In this paper, we present PipeLayer, a ReRAM-based PIM accelerator for CNNs that supports both training and testing. We analyze the data dependencies and weight updates in training algorithms and propose an efficient pipeline that exploits inter-layer parallelism. To exploit intra-layer parallelism, we propose a highly parallel design based on the notions of parallelism granularity and weight replication. With these design choices, PipeLayer enables highly pipelined execution of both training and testing without introducing the potential stalls of previous work. The experimental results show that PipeLayer achieves an average speedup of 42.45x over a GPU platform and an average energy saving of 7.17x compared with the GPU implementation.
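To make the batch-level data dependency described above concrete, the following minimal Python sketch (an illustration only, not PipeLayer's design or code; the function names, toy layers, and placeholder update rule are assumptions) shows why only the images within one mini-batch can be streamed through a layer pipeline back-to-back: the next batch must wait for the weight update produced by the current one, so a very deep pipeline drains at every batch boundary.

# Illustrative sketch only (assumption, not the paper's implementation):
# mini-batch training creates a dependency between batches, so a deep
# inference pipeline can only stream the images inside one batch before
# it must wait for the weight update.

import numpy as np

def forward(weights, x):
    # Toy layer-by-layer forward pass (ReLU layers), standing in for a CNN.
    for w in weights:
        x = np.maximum(w @ x, 0.0)
    return x

def update_weights(weights, lr=0.01):
    # Placeholder update: real training backpropagates gradients computed
    # from the batch's forward activations; a random perturbation stands in.
    return [w - lr * np.random.randn(*w.shape) for w in weights]

weights = [np.random.randn(8, 8) for _ in range(4)]   # 4 toy layers
images = np.random.randn(256, 8)                      # 256 toy "images"
batch_size = 64

for start in range(0, len(images), batch_size):
    batch = images[start:start + batch_size]
    for x in batch:                      # only these 64 images can be pipelined consecutively
        _ = forward(weights, x)
    weights = update_weights(weights)    # the next batch depends on the updated weights,
                                         # so the pipeline must drain at this boundary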

Published In

Proceedings - International Symposium on High-Performance Computer Architecture

DOI

10.1109/HPCA.2017.55

ISSN

1530-0897

Publication Date

May 5, 2017

Start / End Page

541 / 552
 

Citation

APA
Song, L., Qian, X., Li, H., & Chen, Y. (2017). PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning. In Proceedings - International Symposium on High-Performance Computer Architecture (pp. 541–552). https://doi.org/10.1109/HPCA.2017.55

Chicago
Song, L., X. Qian, H. Li, and Y. Chen. “PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning.” In Proceedings - International Symposium on High-Performance Computer Architecture, 541–52, 2017. https://doi.org/10.1109/HPCA.2017.55.

ICMJE
Song L, Qian X, Li H, Chen Y. PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning. In: Proceedings - International Symposium on High-Performance Computer Architecture. 2017. p. 541–52.

MLA
Song, L., et al. “PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning.” Proceedings - International Symposium on High-Performance Computer Architecture, 2017, pp. 541–52. Scopus, doi:10.1109/HPCA.2017.55.

NLM
Song L, Qian X, Li H, Chen Y. PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning. Proceedings - International Symposium on High-Performance Computer Architecture. 2017. p. 541–552.
