
Trained Rank Pruning for Efficient Deep Neural Networks

Publication · Conference
Xu, Y; Li, Y; Zhang, S; Wen, W; Wang, B; Dai, W; Qi, Y; Chen, Y; Lin, W; Xiong, H
Published in: Proceedings - 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, EMC2-NIPS 2019
December 1, 2019

To accelerate DNN inference, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations. Several previous works attempted to directly approximate a pre-trained model by low-rank decomposition; however, small approximation errors in the parameters can accumulate into a large prediction loss. Clearly, it is suboptimal to separate low-rank approximation from training. Unlike previous works, this paper integrates low-rank approximation and regularization into the training process. We propose Trained Rank Pruning (TRP), which alternates between low-rank approximation and training. TRP maintains the capacity of the original network while imposing low-rank constraints during training. A nuclear regularization optimized by stochastic sub-gradient descent is utilized to further promote low rank in TRP. Networks trained with TRP have an inherently low-rank structure and can be approximated with negligible performance loss, thus eliminating fine-tuning after low-rank approximation. The proposed method is comprehensively evaluated on CIFAR-10 and ImageNet, outperforming previous compression counterparts using low-rank approximation. Our code is available at: https://github.com/yuhuixu1993/Trained-Rank-Pruning.
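The alternation the abstract describes can be illustrated with a minimal sketch: a gradient step with a nuclear-norm sub-gradient penalty, followed periodically by a truncated-SVD projection of the weights onto a low-rank set. The function names, the energy-based rank selection rule, and the placeholder task-loss gradient below are illustrative assumptions, not the authors' implementation (which is at the GitHub link above).

```python
import numpy as np

def low_rank_project(W, energy=0.98):
    """Truncated SVD keeping the smallest rank whose singular values
    retain `energy` of the total spectral energy (an assumed rank rule)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return (U[:, :r] * s[:r]) @ Vt[:r], r

def nuclear_subgrad(W):
    """A sub-gradient of the nuclear norm ||W||_* at W is U @ Vt."""
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ Vt

# Toy loop sketching TRP's alternation on a single weight matrix:
# gradient step + nuclear-norm penalty, with a periodic low-rank projection.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16))
lam, lr = 0.05, 0.1
for step in range(50):
    grad = W  # placeholder for the task-loss gradient (assumption)
    W -= lr * (grad + lam * nuclear_subgrad(W))
    if step % 10 == 9:  # alternate training with low-rank approximation
        W, r = low_rank_project(W)
```

Because the penalty and the projection both push the spectrum toward low rank during training, the final SVD truncation loses little accuracy, which is why the paper can skip post-hoc fine-tuning.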


Published In

Proceedings - 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, EMC2-NIPS 2019

DOI

10.1109/EMC2-NIPS53020.2019.00011

Publication Date

December 1, 2019

Start / End Page

14 / 17

Citation

APA: Xu, Y., Li, Y., Zhang, S., Wen, W., Wang, B., Dai, W., … Xiong, H. (2019). Trained Rank Pruning for Efficient Deep Neural Networks. In Proceedings - 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, EMC2-NIPS 2019 (pp. 14–17). https://doi.org/10.1109/EMC2-NIPS53020.2019.00011

Chicago: Xu, Y., Y. Li, S. Zhang, W. Wen, B. Wang, W. Dai, Y. Qi, Y. Chen, W. Lin, and H. Xiong. “Trained Rank Pruning for Efficient Deep Neural Networks.” In Proceedings - 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, EMC2-NIPS 2019, 14–17, 2019. https://doi.org/10.1109/EMC2-NIPS53020.2019.00011.

ICMJE: Xu Y, Li Y, Zhang S, Wen W, Wang B, Dai W, et al. Trained Rank Pruning for Efficient Deep Neural Networks. In: Proceedings - 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, EMC2-NIPS 2019. 2019. p. 14–7.

MLA: Xu, Y., et al. “Trained Rank Pruning for Efficient Deep Neural Networks.” Proceedings - 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, EMC2-NIPS 2019, 2019, pp. 14–17. Scopus, doi:10.1109/EMC2-NIPS53020.2019.00011.

NLM: Xu Y, Li Y, Zhang S, Wen W, Wang B, Dai W, Qi Y, Chen Y, Lin W, Xiong H. Trained Rank Pruning for Efficient Deep Neural Networks. Proceedings - 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, EMC2-NIPS 2019. 2019. p. 14–17.
