HERO: Hessian-Enhanced Robust Optimization for Unifying and Improving Generalization and Quantization Performance

Conference Paper

With the growing demand for deploying neural network models on mobile and edge devices, it is desirable both to improve a model's generalizability to unseen test data and to enhance its robustness under fixed-point quantization for efficient deployment. Minimizing the training loss alone, however, provides few guarantees on generalization or quantization performance. In this work, we improve generalization and quantization performance simultaneously by theoretically unifying them under a single framework: improving the model's robustness against bounded weight perturbations by minimizing the eigenvalues of the Hessian matrix of the loss with respect to the model weights. We therefore propose HERO, a Hessian-enhanced robust optimization method that minimizes the Hessian eigenvalues through a gradient-based training process, improving generalization and quantization performance at the same time. HERO achieves up to a 3.8% gain in test accuracy, up to 30% higher accuracy under 80% training-label perturbation, and the best post-training quantization accuracy across a wide range of precisions, including a >10% accuracy improvement over SGD-trained models, for common model architectures on various datasets.
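As an illustration of the core mechanism described in the abstract, the sketch below penalizes the dominant eigenvalue of the loss Hessian during training, estimating it by power iteration with Hessian-vector products in PyTorch. This is a minimal, hypothetical sketch of the general idea, not the authors' exact HERO procedure; the helper names (top_hessian_eigenvalue, hero_style_step) and the hero_strength coefficient are illustrative assumptions.

    # Minimal sketch (not the paper's exact HERO algorithm): estimate the
    # dominant Hessian eigenvalue by power iteration on Hessian-vector
    # products, then penalize it alongside the task loss.
    import torch

    def top_hessian_eigenvalue(loss, params, n_iters=5):
        # First-order gradients taken with create_graph=True so that
        # Hessian-vector products can be computed below.
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # Random unit-norm starting direction for power iteration.
        v = [torch.randn_like(p) for p in params]
        norm = torch.sqrt(sum((u * u).sum() for u in v))
        v = [u / norm for u in v]
        for _ in range(n_iters):
            # Hessian-vector product: differentiate (g . v) w.r.t. params.
            gv = sum((g * u).sum() for g, u in zip(grads, v))
            hv = torch.autograd.grad(gv, params, retain_graph=True)
            norm = torch.sqrt(sum((h * h).sum() for h in hv)) + 1e-12
            v = [(h / norm).detach() for h in hv]
        # Differentiable Rayleigh quotient v^T H v at the converged direction.
        gv = sum((g * u).sum() for g, u in zip(grads, v))
        hv = torch.autograd.grad(gv, params, create_graph=True)
        return sum((h * u).sum() for h, u in zip(hv, v))

    def hero_style_step(model, criterion, optimizer, x, y, hero_strength=0.01):
        # hero_strength is a hypothetical regularization coefficient.
        params = [p for p in model.parameters() if p.requires_grad]
        loss = criterion(model(x), y)
        eig = top_hessian_eigenvalue(loss, params)
        optimizer.zero_grad()
        (loss + hero_strength * eig).backward()
        optimizer.step()
        return loss.item(), eig.item()

Backpropagating through the eigenvalue estimate requires a double-backward pass, so a step of this kind is several times more expensive than a plain SGD step; that cost is the usual trade-off for curvature-aware regularization.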

Cited Authors

  • Yang, H; Yang, X; Gong, NZ; Chen, Y

Published Date

  • July 10, 2022

Published In

  • Proceedings - Design Automation Conference

Start / End Page

  • 25 - 30

International Standard Serial Number (ISSN)

  • 0738-100X

International Standard Book Number 13 (ISBN-13)

  • 9781450391429

Digital Object Identifier (DOI)

  • 10.1145/3489517.3530678

Citation Source

  • Scopus