STABILITY RESULTS IN LEARNING THEORY

Published

Journal Article

The problem of proving generalization bounds for the performance of learning algorithms can be formulated as a problem of bounding the bias and variance of estimators of the expected error. We show how various stability assumptions can be employed for this purpose. We provide a necessary and sufficient stability condition for bounding the bias and variance for the Empirical Risk Minimization algorithm, and various sufficient conditions for bounding the bias and variance of estimators for general algorithms. We discuss settings in which it is possible to obtain exponential bounds, and we prove an extension of the bounded-difference inequality for "almost always" stable algorithms.

Cited Authors

  • Rakhlin, A.; Mukherjee, S.; Poggio, T.

Published Date

  • October 2005

Volume / Issue

  • 3 / 4

Start / End Page

  • 397 - 417

Electronic International Standard Serial Number (EISSN)

  • 1793-6861

International Standard Serial Number (ISSN)

  • 0219-5305

Digital Object Identifier (DOI)

  • 10.1142/s0219530505000650

Language

  • en