FPGA acceleration of recurrent neural network based language model

Conference Publication
Li, S; Wu, C; Li, H; Li, B; Wang, Y; Qiu, Q
Published in: Proceedings - 2015 IEEE 23rd Annual International Symposium on Field-Programmable Custom Computing Machines, FCCM 2015
July 15, 2015

The recurrent neural network (RNN) based language model (RNNLM) is a biologically inspired model for natural language processing. It retains historical information through additional recurrent connections and is therefore very effective at capturing the semantics of sentences. However, adoption of RNNLMs has been greatly hindered by the high computation cost of training. This work presents an FPGA implementation framework for accelerating RNNLM training. At the architectural level, we improve the parallelism of the RNN training scheme and reduce the computing resource requirement to enhance computation efficiency. The hardware implementation primarily targets reducing the data communication load. A multi-threaded computation engine is employed that masks the long memory latency and reuses frequently accessed data. An evaluation on the Microsoft Research Sentence Completion Challenge shows that the proposed FPGA implementation outperforms traditional class-based modest-size recurrent networks, achieving 46.2% training accuracy. Moreover, experiments at different network sizes demonstrate the strong scalability of the proposed framework.
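The model class accelerated here is built on an Elman-style recurrence, in which the hidden state carries sentence history forward through a recurrent weight matrix. The sketch below is a minimal NumPy illustration of that recurrence, assuming one-hot word inputs and sigmoid/softmax activations as in Mikolov-style RNNLMs; it is not the authors' FPGA design, and all names and sizes (rnn_step, vocab, hidden) are hypothetical.

```python
import numpy as np

# Generic Elman-style RNNLM forward step (an illustrative sketch,
# not the paper's FPGA implementation). The hidden state h_t carries
# the sentence history via the recurrent weight matrix W:
#   h_t = sigmoid(U @ x_t + W @ h_{t-1})
#   y_t = softmax(V @ h_t)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_step(x_t, h_prev, U, W, V):
    """One time step: consume input vector x_t, update the hidden
    state, and emit a probability distribution over the vocabulary."""
    h_t = sigmoid(U @ x_t + W @ h_prev)  # recurrent connection carries history
    y_t = softmax(V @ h_t)               # next-word distribution
    return h_t, y_t

# Hypothetical sizes, for illustration only.
vocab, hidden = 10_000, 128
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(hidden, vocab))
W = rng.normal(scale=0.1, size=(hidden, hidden))
V = rng.normal(scale=0.1, size=(vocab, hidden))

h = np.zeros(hidden)
x = np.zeros(vocab)
x[42] = 1.0                              # one-hot encoding of a word
h, y = rnn_step(x, h, U, W, V)
print(y.shape)                           # (10000,)
```

The sequential dependence of h_t on h_{t-1} is what makes RNN training hard to parallelize and memory-bound; this is the bottleneck the paper's architectural parallelism and data-reuse techniques address.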

Published In

Proceedings - 2015 IEEE 23rd Annual International Symposium on Field-Programmable Custom Computing Machines, FCCM 2015

DOI

10.1109/FCCM.2015.50
Publication Date

July 15, 2015

Start / End Page

111 / 118
 

Citation

Li, S., Wu, C., Li, H., Li, B., Wang, Y., & Qiu, Q. (2015). FPGA acceleration of recurrent neural network based language model. In Proceedings - 2015 IEEE 23rd Annual International Symposium on Field-Programmable Custom Computing Machines, FCCM 2015 (pp. 111–118). https://doi.org/10.1109/FCCM.2015.50
