Efficient Weight Pruning for Optical Neural Networks: When Pruned Weights are Non-Zeros
Optical neural networks (ONNs) have emerged as a promising platform for energy-efficient deep learning. However, their resource-intensive manufacturing process calls for efficient methods to streamline ONN architectures without sacrificing performance. Weight pruning is a potential remedy. Unlike in conventional neural networks, however, pruned weights in ONNs are generally not zero, which renders most traditional pruning methods ineffective. In this paper, we propose a novel two-stage pruning method tailored to ONNs. In the first stage, a first-order Taylor expansion of the loss function is applied to identify and prune unimportant weights, and a novel optimization method is developed to determine the shared value assigned to the pruned weights. In the second stage, fine-tuning jointly adjusts the unpruned weights and the shared value of the pruned weights. Experimental results on multiple public datasets demonstrate the efficacy of the proposed approach: it achieves superior model compression with minimal loss in accuracy compared with conventional pruning techniques.
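To make the two-stage idea concrete, the sketch below illustrates, under stated assumptions, how a first-order Taylor criterion might interact with a shared pruned value. The function names (`taylor_scores`, `prune_to_shared_value`), the alternating selection/re-fit loop, and the closed-form surrogate for the shared value are all illustrative assumptions; the abstract does not specify the authors' actual optimization method.

```python
# Minimal, hypothetical PyTorch sketch of stage 1: select weights whose
# replacement by a shared value v perturbs the loss least (first-order
# Taylor estimate), alternating with a re-fit of v on the selected set.
import torch

def taylor_scores(weight: torch.Tensor, grad: torch.Tensor,
                  shared_value: float) -> torch.Tensor:
    """First-order Taylor estimate of the loss change when a weight w_i
    is replaced by the shared value v:  |dL/dw_i * (v - w_i)|."""
    return (grad * (shared_value - weight)).abs()

def prune_to_shared_value(weight: torch.Tensor, grad: torch.Tensor,
                          sparsity: float, num_iters: int = 10) -> torch.Tensor:
    v = weight.mean().item()            # initial guess for the shared value
    k = int(sparsity * weight.numel())  # number of weights to prune
    mask = torch.zeros_like(weight, dtype=torch.bool)
    for _ in range(num_iters):
        scores = taylor_scores(weight, grad, v)
        idx = scores.flatten().argsort()[:k]   # least-important weights
        mask = torch.zeros_like(mask).flatten()
        mask[idx] = True
        mask = mask.view_as(weight)
        # Re-fit v (assumed surrogate): under the first-order model the
        # total loss change is sum_i g_i * (v - w_i); the value that zeroes
        # this sum is v = sum(g_i * w_i) / sum(g_i).
        g, w = grad[mask], weight[mask]
        denom = g.sum()
        if denom.abs() > 1e-12:
            v = (g * w).sum().item() / denom.item()
    pruned = weight.clone()
    pruned[mask] = v                    # all pruned weights share one value
    return pruned

# Example usage on a toy weight matrix with synthetic gradients:
w = torch.randn(64, 64)
g = torch.randn(64, 64) * 0.01
w_pruned = prune_to_shared_value(w, g, sparsity=0.7)
```

Stage 2 would then fine-tune the unpruned entries and the scalar shared value jointly while keeping the pruning mask fixed, for instance by registering the shared value as a trainable parameter.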