General conditions for predictivity in learning theory.


Journal Article

Developing theoretical foundations for learning is a key step towards understanding intelligence. 'Learning from examples' is a paradigm in which systems (natural or artificial) learn a functional relationship from a training set of examples. Within this paradigm, a learning algorithm is a map from the space of training sets to the hypothesis space of possible functional solutions. A central question for the theory is to determine conditions under which a learning algorithm will generalize from its finite training set to novel examples. A milestone in learning theory was a characterization of conditions on the hypothesis space that ensure generalization for the natural class of empirical risk minimization (ERM) learning algorithms that are based on minimizing the error on the training set. Here we provide conditions for generalization in terms of a precise stability property of the learning process: when the training set is perturbed by deleting one example, the learned hypothesis does not change much. This stability property stipulates conditions on the learning map rather than on the hypothesis space, subsumes the classical theory for ERM algorithms, and is applicable to more general algorithms. The surprising connection between stability and predictivity has implications for the foundations of learning theory and for the design of novel algorithms, and provides insights into problems as diverse as language learning and inverse problems in physics and engineering.
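The stability property described in the abstract can be illustrated empirically. The sketch below (not from the paper; the learner, toy data, and all names are illustrative assumptions) measures leave-one-out stability for a simple regularized least-squares learner: train on the full set and on the set with one example deleted, then compare the two hypotheses at the deleted point.

```python
# Sketch, assuming a 1-D regularized least-squares learner through the
# origin: f(x) = w*x with w = sum(x*y) / (sum(x^2) + lam).
# Toy data and parameter values are illustrative, not from the paper.

def fit(points, lam=0.1):
    """Return the hypothesis learned from the training set `points`."""
    sxy = sum(x * y for x, y in points)
    sxx = sum(x * x for x, _ in points)
    w = sxy / (sxx + lam)
    return lambda x: w * x

def loo_stability(points, lam=0.1):
    """Largest change in the learned hypothesis, evaluated at the
    deleted point, when one example is removed from the training set."""
    f_full = fit(points, lam)
    beta = 0.0
    for i, (x, _) in enumerate(points):
        f_minus = fit(points[:i] + points[i + 1:], lam)
        beta = max(beta, abs(f_full(x) - f_minus(x)))
    return beta

data = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 3.8)]
print(loo_stability(data))  # a small value indicates a stable learning map
```

A small `beta` over representative training sets is the kind of perturbation-insensitivity the paper connects to generalization; the actual theory requires the bound to hold in a distribution-dependent, uniform sense rather than on a single toy sample.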

Cited Authors

  • Poggio, T; Rifkin, R; Mukherjee, S; Niyogi, P

Published Date

  • March 2004

Published In

  • Nature

Volume / Issue

  • 428 / 6981

Start / End Page

  • 419 - 422

PubMed ID

  • 15042089

Electronic International Standard Serial Number (EISSN)

  • 1476-4687

International Standard Serial Number (ISSN)

  • 0028-0836

Digital Object Identifier (DOI)

  • 10.1038/nature02341


Language

  • eng