Learning low-rank deep neural networks via singular vector orthogonality regularization and singular value sparsification

Publication, Conference
Yang, H; Tang, M; Wen, W; Yan, F; Hu, D; Li, A; Li, H; Chen, Y
Published in: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
June 1, 2020

Modern deep neural networks (DNNs) often require large amounts of memory and computation. To deploy DNN algorithms efficiently on edge or mobile devices, a series of DNN compression algorithms have been explored, including factorization methods. Factorization methods approximate the weight matrix of a DNN layer with the product of two or more low-rank matrices. However, it is hard to measure the ranks of DNN layers during the training process. Previous works mainly induce low rank through implicit approximations or via a costly singular value decomposition (SVD) process on every training step. The former approach usually incurs a high accuracy loss, while the latter is inefficient. In this work, we propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step. SVD training first decomposes each layer into the form of its full-rank SVD, then performs training directly on the decomposed weights. We add orthogonality regularization to the singular vectors, which ensures the valid form of SVD and avoids gradient vanishing/exploding. Low rank is encouraged by applying sparsity-inducing regularizers on the singular values of each layer. Singular value pruning is applied at the end to explicitly reach a low-rank model. We empirically show that SVD training can significantly reduce the rank of DNN layers and achieve a higher reduction in computation load under the same accuracy, compared not only to previous factorization methods but also to state-of-the-art filter pruning methods.
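For intuition, here is a minimal PyTorch sketch of the training scheme the abstract describes: each layer is stored as its SVD factors (U, s, V) and trained directly on them, with an orthogonality penalty on the singular vectors, an L1 penalty on the singular values, and explicit pruning at the end. The class name, shapes, and regularization/pruning coefficients are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SVDLinear(nn.Module):
    """Linear layer stored in its full-rank SVD form: W = U @ diag(s) @ V^T."""

    def __init__(self, in_features, out_features):
        super().__init__()
        # Decompose a randomly initialized weight once, at full rank.
        w = torch.randn(out_features, in_features)
        u, s, vh = torch.linalg.svd(w, full_matrices=False)
        self.U = nn.Parameter(u)      # (out_features, r)
        self.s = nn.Parameter(s)      # (r,)
        self.V = nn.Parameter(vh.T)   # (in_features, r)

    def forward(self, x):
        # Train directly on the decomposed factors; SVD is never re-run.
        return ((x @ self.V) * self.s) @ self.U.T

    def orthogonality_penalty(self):
        # Keeps U and V column-orthogonal so (U, s, V) stays a valid SVD
        # and gradients neither vanish nor explode.
        eye = torch.eye(self.s.numel(), device=self.s.device)
        return ((self.U.T @ self.U - eye) ** 2).sum() + \
               ((self.V.T @ self.V - eye) ** 2).sum()

    def sparsity_penalty(self):
        # Sparsity-inducing L1 regularizer on the singular values (low rank).
        return self.s.abs().sum()

    @torch.no_grad()
    def prune_(self, threshold=1e-2):
        # Final explicit rank reduction: drop singular values below threshold.
        keep = self.s.abs() > threshold
        self.U = nn.Parameter(self.U[:, keep].clone())
        self.s = nn.Parameter(self.s[keep].clone())
        self.V = nn.Parameter(self.V[:, keep].clone())


# Usage sketch: add both penalties to the task loss during training,
# then prune once at the end to obtain the low-rank model.
layer = SVDLinear(512, 256)
x = torch.randn(8, 512)
loss = layer(x).pow(2).mean() \
     + 1e-3 * layer.orthogonality_penalty() \
     + 1e-4 * layer.sparsity_penalty()
loss.backward()
layer.prune_()
```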


Published In

IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

DOI

10.1109/CVPRW50498.2020.00347

EISSN

2160-7516

ISSN

2160-7508

ISBN

9781728193601

Publication Date

June 1, 2020

Volume

2020-June

Start / End Page

2899 / 2908

Citation

APA:
Yang, H., Tang, M., Wen, W., Yan, F., Hu, D., Li, A., … Chen, Y. (2020). Learning low-rank deep neural networks via singular vector orthogonality regularization and singular value sparsification. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (Vol. 2020-June, pp. 2899–2908). https://doi.org/10.1109/CVPRW50498.2020.00347

Chicago:
Yang, H., M. Tang, W. Wen, F. Yan, D. Hu, A. Li, H. Li, and Y. Chen. “Learning low-rank deep neural networks via singular vector orthogonality regularization and singular value sparsification.” In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2020-June:2899–2908, 2020. https://doi.org/10.1109/CVPRW50498.2020.00347.

ICMJE:
Yang H, Tang M, Wen W, Yan F, Hu D, Li A, et al. Learning low-rank deep neural networks via singular vector orthogonality regularization and singular value sparsification. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. 2020. p. 2899–908.

MLA:
Yang, H., et al. “Learning low-rank deep neural networks via singular vector orthogonality regularization and singular value sparsification.” IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, vol. 2020-June, 2020, pp. 2899–908. Scopus, doi:10.1109/CVPRW50498.2020.00347.

NLM:
Yang H, Tang M, Wen W, Yan F, Hu D, Li A, Li H, Chen Y. Learning low-rank deep neural networks via singular vector orthogonality regularization and singular value sparsification. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. 2020. p. 2899–2908.
