BSQ: EXPLORING BIT-LEVEL SPARSITY FOR MIXED-PRECISION NEURAL NETWORK QUANTIZATION

Publication, Conference
Yang, H; Duan, L; Chen, Y; Li, H
Published in: ICLR 2021 - 9th International Conference on Learning Representations
January 1, 2021

Mixed-precision quantization can potentially achieve the optimal tradeoff between performance and compression rate of deep neural networks, and thus has been widely investigated. However, a systematic method to determine the exact quantization scheme is still lacking. Previous methods either examine only a small, manually designed search space or rely on a cumbersome neural architecture search to explore the vast search space; neither approach reaches an optimal quantization scheme efficiently. This work proposes bit-level sparsity quantization (BSQ), which tackles mixed-precision quantization from a new angle: inducing bit-level sparsity. We consider each bit of the quantized weights as an independent trainable variable and introduce a differentiable bit-sparsity regularizer. BSQ can induce all-zero bits across a group of weight elements and realize dynamic precision reduction, yielding a mixed-precision quantization scheme of the original model. Our method enables exploration of the full mixed-precision space with a single gradient-based optimization process, with only one hyperparameter to trade off performance and compression. BSQ achieves both higher accuracy and higher bit reduction on various model architectures on the CIFAR-10 and ImageNet datasets compared to previous methods.
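The abstract describes weights being decomposed into independently trainable bits with a differentiable bit-sparsity regularizer. The paper's actual training procedure is not given here; the NumPy sketch below (function names such as `bit_decompose` are ours, not from the paper) only illustrates the core idea that all-zero bit planes across a weight group allow that group's precision to shrink.

```python
import numpy as np

def bit_decompose(w_q, n_bits=8):
    """Split non-negative integer quantized weights into per-bit planes.
    In BSQ-style training each plane would be treated as an independent
    (continuous, trainable) variable; here we just extract the planes."""
    bits = np.stack([(w_q >> b) & 1 for b in range(n_bits)], axis=0)
    return bits.astype(np.float32)  # shape: (n_bits, *w_q.shape)

def bit_sparsity_reg(bits):
    """A simple differentiable bit-sparsity penalty: the sum of per-plane
    L1 norms. Driving an entire plane of a weight group to zero means that
    bit can be dropped, reducing the group's precision by one bit."""
    return sum(np.abs(plane).sum() for plane in bits)

# Example: a small weight group stored in 4 bits whose values fit in 2 bits.
w_q = np.array([3, 1, 2, 0])
bits = bit_decompose(w_q, n_bits=4)

# Planes 2 and 3 are all zero, so the group's effective precision is 2 bits.
effective_bits = int((bits.reshape(4, -1).sum(axis=1) > 0).sum())
penalty = bit_sparsity_reg(bits)
```

In this toy example the regularizer value simply counts set bits; in actual training the bit variables are continuous, so the L1 penalty pulls high-order planes toward zero, which is what enables the dynamic precision reduction described above.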


Published In

ICLR 2021 - 9th International Conference on Learning Representations

Publication Date

January 1, 2021

Citation

Yang, H., Duan, L., Chen, Y., & Li, H. (2021). BSQ: EXPLORING BIT-LEVEL SPARSITY FOR MIXED-PRECISION NEURAL NETWORK QUANTIZATION. In ICLR 2021 - 9th International Conference on Learning Representations.
