On Statistical Efficiency in Learning

Journal Article

A central issue in many statistical learning problems is selecting an appropriate model from a set of candidate models. For a given fixed dataset, large models tend to inflate the variance (overfitting), while small models tend to introduce bias (underfitting). In this work, we address the critical challenge of model selection by striking a balance between model fit and model complexity, thus gaining reliable predictive power. We consider the task of approaching the theoretical limit of statistical learning, meaning that the selected model achieves predictive performance as good as that of the best possible model in a class of potentially misspecified candidate models. We propose a generalized notion of Takeuchi's information criterion and prove that, under reasonable assumptions, the proposed method asymptotically achieves the optimal out-of-sample prediction loss. To the best of our knowledge, this is the first proof of this asymptotic property of Takeuchi's information criterion. Our proof applies to a wide variety of nonlinear models, loss functions, and high-dimensional settings (in the sense that model complexity can grow with the sample size). The proposed method can serve as a computationally efficient surrogate for leave-one-out cross-validation. Moreover, for modeling streaming data, we propose an online algorithm that sequentially expands the model complexity to enhance selection stability and reduce computational cost. Experimental studies show that the proposed method attains desirable predictive power at significantly lower computational cost than some popular methods.
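
For orientation, the classical (non-generalized) Takeuchi information criterion replaces AIC's parameter-count penalty with the trace term tr(J^{-1} K), where J is the averaged negative Hessian of the log-likelihood and K is the empirical variance of the per-sample score, both evaluated at the fitted parameters; under correct model specification J = K and the penalty reduces to the parameter count, recovering AIC. The Python sketch below computes this classical criterion with finite differences. It is only an illustrative reconstruction under those standard definitions, not the paper's generalized criterion, and the names (tic, per_sample_loglik, theta_hat) are hypothetical.

```python
# Minimal sketch of the classical Takeuchi information criterion (TIC).
# Shown for orientation only: the paper proposes a *generalized* TIC whose
# exact form is not given in the abstract above.
import numpy as np

def tic(per_sample_loglik, theta_hat, data, eps=1e-4):
    """Classical TIC = -2 * loglik(theta_hat) + 2 * tr(J^{-1} K),
    with J the averaged negative Hessian of the log-likelihood and K the
    empirical variance of the per-sample score, both at theta_hat.
    Derivatives are approximated by central finite differences."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    data = list(data)
    n, p = len(data), len(theta_hat)

    def ll(theta):
        # Total log-likelihood of the dataset.
        return sum(per_sample_loglik(theta, x) for x in data)

    def score(x):
        # Gradient of the per-sample log-likelihood at theta_hat.
        g = np.zeros(p)
        for j in range(p):
            e = np.zeros(p)
            e[j] = eps
            g[j] = (per_sample_loglik(theta_hat + e, x)
                    - per_sample_loglik(theta_hat - e, x)) / (2 * eps)
        return g

    S = np.array([score(x) for x in data])   # n-by-p score matrix
    K = S.T @ S / n                          # empirical Var[score]

    J = np.zeros((p, p))                     # averaged negative Hessian
    for i in range(p):
        for j in range(p):
            ei = np.zeros(p); ei[i] = eps
            ej = np.zeros(p); ej[j] = eps
            J[i, j] = -(ll(theta_hat + ei + ej) - ll(theta_hat + ei - ej)
                        - ll(theta_hat - ei + ej) + ll(theta_hat - ei - ej)
                        ) / (4 * eps ** 2 * n)

    penalty = np.trace(np.linalg.solve(J, K))  # tr(J^{-1} K)
    return -2.0 * ll(theta_hat) + 2.0 * penalty

# Worked example: a Gaussian model (mean, log-std) fit to Student-t data,
# i.e. a deliberately misspecified model, so J != K and the TIC penalty
# deviates from AIC's parameter count of 2.
rng = np.random.default_rng(0)
x = rng.standard_t(df=5, size=200)
theta = np.array([x.mean(), np.log(x.std())])  # Gaussian MLE
ll_one = lambda th, xi: (-0.5 * np.log(2 * np.pi) - th[1]
                         - 0.5 * ((xi - th[0]) / np.exp(th[1])) ** 2)
print(tic(ll_one, theta, x))
```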

Cited Authors

  • Ding, J; Diao, E; Zhou, J; Tarokh, V

Published Date

  • April 1, 2021

Published In

  • IEEE Transactions on Information Theory

Volume / Issue

  • 67 / 4

Start / End Page

  • 2488 - 2506

Electronic International Standard Serial Number (EISSN)

  • 1557-9654

International Standard Serial Number (ISSN)

  • 0018-9448

Digital Object Identifier (DOI)

  • 10.1109/TIT.2020.3047620

Citation Source

  • Scopus