A Penalized Method for the Predictive Limit of Learning

Conference Paper

Machine learning systems learn from and make predictions by building models from observed data. Because large models tend to overfit while small models tend to underfit a given fixed dataset, a critical challenge is to select an appropriate model (e.g., a set of variables/features). Model selection aims to strike a balance between goodness of fit and model complexity, and thus to gain reliable predictive power. In this paper, we study a penalized model selection technique that asymptotically achieves the optimal expected prediction loss (referred to as the limit of learning) offered by a set of candidate models. We prove that the proposed procedure is both statistically efficient, in the sense that it asymptotically approaches the limit of learning, and computationally efficient, in the sense that it can be much faster than cross-validation methods. Our theory applies to a wide variety of model classes, loss functions, and high-dimensional settings (in the sense that model complexity can grow with the data size). We release a Python package implementing the proposed method for general use, including logistic regression and neural networks.
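The abstract describes selecting among candidate models by minimizing a criterion that trades training loss against a complexity penalty. The sketch below illustrates that general idea on polynomial regression, using a generic AIC-style penalty; the paper's actual penalty term and guarantees are not reproduced here, and all data and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: cubic signal plus Gaussian noise (illustrative only).
n = 200
x = rng.uniform(-1.0, 1.0, n)
y = 1.0 + 2.0 * x - 1.5 * x**3 + 0.3 * rng.normal(size=n)

def penalized_score(degree):
    """Training loss plus a complexity penalty.

    Uses an AIC-style 2*d/n penalty on the log mean squared error;
    this is a generic placeholder, not the paper's exact penalty.
    """
    X = np.vander(x, degree + 1)                    # polynomial design matrix
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares fit
    mse = np.mean((X @ coef - y) ** 2)
    d = degree + 1                                  # number of fitted parameters
    return np.log(mse) + 2.0 * d / n

# Penalized selection: a single pass over the candidates, with no
# refitting on held-out folds as cross-validation would require.
degrees = range(1, 10)
best = min(degrees, key=penalized_score)
print("selected polynomial degree:", best)
```

Because each candidate is fit once on the full data, the search costs roughly one fit per candidate, which is the computational advantage over K-fold cross-validation that the abstract alludes to.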

Cited Authors

  • Ding, J; Diao, E; Zhou, J; Tarokh, V

Published Date

  • September 10, 2018

Published In

  • 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Volume / Issue

  • 2018-April

Start / End Page

  • 4414 - 4418

International Standard Serial Number (ISSN)

  • 1520-6149

International Standard Book Number 13 (ISBN-13)

  • 9781538646588

Digital Object Identifier (DOI)

  • 10.1109/ICASSP.2018.8461832

Citation Source

  • Scopus