
Perturbation diversity certificates robust generalization.

Publication: Journal Article
Qian, Z; Zhang, S; Huang, K; Wang, Q; Yi, X; Gu, B; Xiong, H
Published in: Neural networks : the official journal of the International Neural Network Society
April 2024

While adversarial training has proven to be one of the most effective defenses against adversarial attacks on deep neural networks, it suffers from over-fitting to the training adversarial data and thus may not guarantee robust generalization. This may stem from the fact that conventional adversarial training methods generate adversarial perturbations in a supervised way, so that the resulting adversarial examples are highly biased towards the decision boundary, leading to an inhomogeneous data distribution. To mitigate this limitation, we propose to generate adversarial examples from a perturbation-diversity perspective. Specifically, the generated perturbed samples are not only adversarial but also diverse, so as to certify robust generalization and a significant robustness improvement through a homogeneous data distribution. We provide theoretical and empirical analysis establishing a foundation for the proposed method. As a major contribution, we prove that promoting perturbation diversity leads to a better robust generalization bound. To verify the effectiveness of our method, we conduct extensive experiments over different datasets (e.g., CIFAR-10, CIFAR-100, SVHN) with different adversarial attacks (e.g., PGD, CW). Experimental results show that our method outperforms other state-of-the-art methods (e.g., PGD and Feature Scattering) in robust generalization performance.
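The core idea the abstract describes — craft perturbations that maximize the loss while staying diverse across the batch, rather than all collapsing towards the decision boundary — can be sketched as follows. This is an illustrative toy, not the paper's actual algorithm: it runs PGD on a NumPy logistic-regression model, and the diversity term (a repulsion of each perturbation from the batch-mean perturbation, weighted by a hypothetical `div_weight` parameter) is an assumption chosen for simplicity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_diverse(X, y, w, eps=0.3, alpha=0.05, steps=10, div_weight=0.1, rng=None):
    """PGD attack on logistic regression with a simple perturbation-diversity bonus.

    X: (n, d) inputs, y: (n,) labels in {-1, +1}, w: (d,) model weights.
    Illustrative sketch only; the paper's diversity objective may differ.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    delta = rng.uniform(-eps, eps, size=X.shape)  # random start inside the eps-ball
    for _ in range(steps):
        X_adv = X + delta
        # Gradient of the logistic loss -log(sigmoid(y * x.w)) w.r.t. the input x:
        #   dL/dx = -y * sigmoid(-y * x.w) * w
        margins = y * (X_adv @ w)
        grad_adv = (-y * sigmoid(-margins))[:, None] * w[None, :]
        # Diversity bonus: push each example's perturbation away from the
        # batch-mean perturbation, spreading the adversarial examples out.
        grad_div = delta - delta.mean(axis=0, keepdims=True)
        # Signed ascent on (loss + div_weight * diversity), then project back.
        delta = delta + alpha * np.sign(grad_adv + div_weight * grad_div)
        delta = np.clip(delta, -eps, eps)
    return X + delta
```

Setting `div_weight=0` recovers plain PGD; a positive weight trades a little per-example loss ascent for a more homogeneous spread of perturbations, which is the mechanism the abstract credits for the improved robust generalization bound.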


Published In

Neural networks : the official journal of the International Neural Network Society

DOI

10.1016/j.neunet.2024.106117

EISSN

1879-2782

ISSN

0893-6080

Publication Date

April 2024

Volume

172

Start / End Page

106117

Related Subject Headings

  • Neural Networks, Computer
  • Generalization, Psychological
  • Artificial Intelligence & Image Processing
  • 4905 Statistics
  • 4611 Machine learning
  • 4602 Artificial intelligence
 

Citation

APA: Qian, Z., Zhang, S., Huang, K., Wang, Q., Yi, X., Gu, B., & Xiong, H. (2024). Perturbation diversity certificates robust generalization. Neural Networks : The Official Journal of the International Neural Network Society, 172, 106117. https://doi.org/10.1016/j.neunet.2024.106117
Chicago: Qian, Zhuang, Shufei Zhang, Kaizhu Huang, Qiufeng Wang, Xinping Yi, Bin Gu, and Huan Xiong. “Perturbation diversity certificates robust generalization.” Neural Networks : The Official Journal of the International Neural Network Society 172 (April 2024): 106117. https://doi.org/10.1016/j.neunet.2024.106117.
ICMJE: Qian Z, Zhang S, Huang K, Wang Q, Yi X, Gu B, et al. Perturbation diversity certificates robust generalization. Neural networks : the official journal of the International Neural Network Society. 2024 Apr;172:106117.
MLA: Qian, Zhuang, et al. “Perturbation diversity certificates robust generalization.” Neural Networks : The Official Journal of the International Neural Network Society, vol. 172, Apr. 2024, p. 106117. Epmc, doi:10.1016/j.neunet.2024.106117.
NLM: Qian Z, Zhang S, Huang K, Wang Q, Yi X, Gu B, Xiong H. Perturbation diversity certificates robust generalization. Neural networks : the official journal of the International Neural Network Society. 2024 Apr;172:106117.