Generalization error of deep neural networks: Role of classification margin and data structure

Conference Paper

© 2017 IEEE. Understanding the generalization properties of deep learning models is critical for their successful use in many applications, especially in regimes where the number of training samples is limited. We study the generalization properties of deep neural networks (DNNs) via the Jacobian matrix of the network. Our analysis applies to arbitrary network structures, types of non-linearities, and pooling operations. We show that bounding the spectral norm of the network's Jacobian matrix reduces the generalization error. In addition, we tie this error to the invariance present in the data and the network. Experiments on the MNIST and ImageNet datasets support these findings. This short paper summarizes our generalization error theorems for DNNs and for general invariant classifiers [1], [2].
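As a rough illustration of the central quantity in this analysis, the sketch below computes the spectral norm (largest singular value) of a network's input-output Jacobian at a data point, using JAX. This is not the authors' code: the toy MLP, its dimensions, and the helper names (`mlp`, `jacobian_spectral_norm`) are illustrative assumptions, while the paper's theorems cover far more general architectures, non-linearities, and pooling operations.

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    # Toy ReLU network; stands in for the general DNNs treated in the paper.
    for W, b in params[:-1]:
        x = jax.nn.relu(W @ x + b)
    W, b = params[-1]
    return W @ x + b

def jacobian_spectral_norm(params, x):
    # Largest singular value of the input-output Jacobian d f(x) / d x.
    J = jax.jacobian(lambda z: mlp(params, z))(x)  # shape: (out_dim, in_dim)
    return jnp.linalg.svd(J, compute_uv=False)[0]

# Random weights with illustrative dimensions (hypothetical, not from the paper).
key = jax.random.PRNGKey(0)
dims = [8, 16, 16, 4]
params = []
for d_in, d_out in zip(dims[:-1], dims[1:]):
    k1, k2, key = jax.random.split(key, 3)
    params.append((jax.random.normal(k1, (d_out, d_in)) / jnp.sqrt(d_in),
                   jnp.zeros(d_out)))

x = jax.random.normal(key, (dims[0],))
print(jacobian_spectral_norm(params, x))  # smaller values -> tighter error bound
```

Per the paper's result, keeping this spectral norm small over the data is one route to a smaller generalization error; evaluating it at training points, as sketched here, is one plausible way to monitor that quantity.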

Cited Authors

  • Sokolić, J; Giryes, R; Sapiro, G; Rodrigues, MRD

Published Date

  • September 1, 2017

Published In

  • 2017 12th International Conference on Sampling Theory and Applications, SampTA 2017

Start / End Page

  • 147 - 151

International Standard Book Number 13 (ISBN-13)

  • 9781538615652

Digital Object Identifier (DOI)

  • 10.1109/SAMPTA.2017.8024476

Citation Source

  • Scopus