
Faster CNNs with Direct Sparse Convolutions and Guided Pruning

Publication, Conference
Park, J; Li, H; Li, S; Wen, W; Chen, Y; Tang, PTP; Dubey, P
Published in: 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings
2017

Abstract

Phenomenally successful in practical inference problems, convolutional neural networks (CNNs) are widely deployed in mobile devices, data centers, and even supercomputers. The number of parameters needed in CNNs, however, is often large and undesirable. Consequently, various methods have been developed to prune a CNN once it is trained. Nevertheless, the resulting CNNs offer limited benefits: while pruning the fully connected layers reduces a CNN's size considerably, it does not improve inference speed noticeably, as the compute-heavy parts lie in the convolutions. Pruning CNNs in a way that increases inference speed often imposes specific sparsity structures, thus limiting the achievable sparsity levels. We present a method that simultaneously realizes size reduction and speed improvement while pruning CNNs. Paramount to our success is an efficient general sparse-with-dense matrix multiplication implementation that is applicable to the convolution of feature maps with kernels of arbitrary sparsity patterns. Complementing this, we developed a performance model that predicts the sweet spots of sparsity levels for different layers and on different computer architectures. Together, these allow us to demonstrate 3.1-7.3× convolution speedups over dense convolution in AlexNet, on Intel Atom, Xeon, and Xeon Phi processors, spanning the spectrum from mobile devices to supercomputers.
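The core kernel described above is a sparse-with-dense matrix product: the pruned weights are stored once in a compressed sparse format, and each output is accumulated only over the surviving nonzero weights. Below is a minimal NumPy/SciPy sketch of that idea; the function name `conv_as_sparse_mm` and its signature are illustrative, not the paper's API, and, unlike the paper's direct kernel, it materializes the lowered (im2col) input for clarity rather than indexing the feature map in place.

```python
import numpy as np
from scipy.sparse import csr_matrix

def conv_as_sparse_mm(weights, feature_map, stride=1):
    """Convolution as (sparse weights) x (dense lowered input).

    weights:     (C_out, C_in, K, K) array, mostly zero after pruning
    feature_map: (C_in, H, W) array, no padding assumed
    returns:     (C_out, H_out, W_out) array
    """
    c_out, c_in, k, _ = weights.shape
    _, h, w = feature_map.shape
    h_out = (h - k) // stride + 1
    w_out = (w - k) // stride + 1

    # Lower the input to a dense (C_in*K*K, H_out*W_out) matrix (im2col).
    # The paper's "direct" formulation avoids building this matrix and
    # reads the feature map in place; we build it here for readability.
    cols = np.empty((c_in * k * k, h_out * w_out), dtype=feature_map.dtype)
    row = 0
    for c in range(c_in):
        for dy in range(k):
            for dx in range(k):
                patch = feature_map[c,
                                    dy:dy + stride * h_out:stride,
                                    dx:dx + stride * w_out:stride]
                cols[row] = patch.reshape(-1)
                row += 1

    # Store the pruned kernel once in CSR form: the multiply then touches
    # only nonzero weights, whatever their (arbitrary) sparsity pattern.
    w_sparse = csr_matrix(weights.reshape(c_out, c_in * k * k))
    out = w_sparse @ cols  # the sparse-with-dense matrix multiplication
    return out.reshape(c_out, h_out, w_out)

# Example with AlexNet-conv1-like shapes (96x3x11x11 kernel, stride 4):
w = np.random.randn(96, 3, 11, 11)
w[np.abs(w) < 1.5] = 0.0  # crude magnitude pruning, just for the demo
y = conv_as_sparse_mm(w, np.random.randn(3, 227, 227), stride=4)
print(y.shape)  # (96, 55, 55)
```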
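The companion performance model can be thought of in roofline terms: dense convolution is compute-bound, while sparse convolution trades fewer (but costlier and less regular) operations against roughly the same memory traffic, so the speedup saturates once a layer becomes bandwidth-bound. The sketch below is a hypothetical simplification of such a model; `alpha` (the assumed per-nonzero overhead factor) and the machine numbers in the sweep are illustrative assumptions, not values calibrated in the paper.

```python
def sparse_speedup(density, dense_flops, bytes_moved, peak_flops, peak_bw,
                   alpha=3.0):
    """Roofline-style estimate of sparse-over-dense convolution speedup.

    density: fraction of kernel weights kept after pruning, in (0, 1]
    alpha:   assumed cost of one sparse multiply-add relative to a dense
             one (index decoding, irregular access)
    """
    # Runtime is bounded by whichever dominates: compute or memory traffic.
    t_dense = max(dense_flops / peak_flops, bytes_moved / peak_bw)
    t_sparse = max(alpha * density * dense_flops / peak_flops,
                   bytes_moved / peak_bw)
    return t_dense / t_sparse

# Sweep densities for a hypothetical layer/machine to locate the "sweet
# spot" where further pruning stops paying off (memory-bound regime).
for density in (1.0, 0.5, 0.3, 0.2, 0.1, 0.05):
    s = sparse_speedup(density, dense_flops=1.8e9, bytes_moved=40e6,
                       peak_flops=500e9, peak_bw=60e9)
    print(f"density {density:4.2f} -> estimated speedup {s:.2f}x")
```

Per-layer sweeps of this kind are what let a pruning schedule target each layer's useful sparsity range rather than a single global level.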


Published In

5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings

Publication Date

2017
 

Citation

APA: Park, J., Li, H., Li, S., Wen, W., Chen, Y., Tang, P. T. P., & Dubey, P. (2017). Faster CNNs with direct sparse convolutions and guided pruning. In 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings.
Chicago: Park, J., H. Li, S. Li, W. Wen, Y. Chen, P. T. P. Tang, and P. Dubey. “Faster CNNs with Direct Sparse Convolutions and Guided Pruning.” In 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings, 2017.
ICMJE: Park J, Li H, Li S, Wen W, Chen Y, Tang PTP, et al. Faster CNNs with direct sparse convolutions and guided pruning. In: 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings. 2017.
MLA: Park, J., et al. “Faster CNNs with Direct Sparse Convolutions and Guided Pruning.” 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings, 2017.
NLM: Park J, Li H, Li S, Wen W, Chen Y, Tang PTP, Dubey P. Faster CNNs with direct sparse convolutions and guided pruning. 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings. 2017.
