A rate-distortion framework for supervised learning


Conference Paper

© 2015 IEEE. An information-theoretic framework is presented for bounding the number of samples needed for supervised learning in a parametric Bayesian setting. The framework is inspired by an analogy with rate-distortion theory, which characterizes tradeoffs in the lossy compression of random sources. In a parametric Bayesian setting, the maximum a posteriori (MAP) classifier can be viewed as a random function of the model parameters. Labeled training data can be viewed as a finite-rate encoding of that source, and the excess loss incurred by using the learned classifier instead of the MAP classifier can be viewed as distortion. A strict bound on the loss, measured in terms of the expected total variation, is derived, yielding the minimum number of training samples needed to drive the expected total variation to within a specified tolerance. The tightness of this bound is demonstrated on the classification of Gaussians, for which closed-form expressions for the bound can be derived.
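The abstract's view of excess loss as distortion can be illustrated with a minimal Monte Carlo sketch. The setup below is an assumption for illustration only, not the paper's construction: two equiprobable 1-D Gaussian classes with known unit variance and unknown means, a plug-in classifier learned from n samples per class, and the fraction of fresh points on which it disagrees with the optimal MAP classifier as a crude proxy for the expected total variation. The function name `disagreement` and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) setup: two equiprobable 1-D Gaussian classes
# with unit variance and true means mu0 = -1, mu1 = +1. The MAP classifier
# thresholds at the midpoint (mu0 + mu1) / 2 = 0.
mu0, mu1, sigma = -1.0, 1.0, 1.0

def disagreement(n_train, n_test=50_000, trials=100):
    """Estimate the probability that the plug-in classifier (threshold at
    the midpoint of the two sample means) disagrees with the MAP
    classifier on a fresh point drawn from the class mixture."""
    rates = []
    for _ in range(trials):
        # learn the threshold from n_train labeled samples per class
        x0 = rng.normal(mu0, sigma, n_train)
        x1 = rng.normal(mu1, sigma, n_train)
        t_hat = (x0.mean() + x1.mean()) / 2.0
        # fresh test points from the equiprobable mixture
        x = np.concatenate([rng.normal(mu0, sigma, n_test // 2),
                            rng.normal(mu1, sigma, n_test // 2)])
        rates.append(np.mean((x > t_hat) != (x > 0.0)))
    return float(np.mean(rates))

for n in (10, 100, 1000):
    print(f"n = {n:5d}  disagreement with MAP ~ {disagreement(n):.4f}")
```

As n grows, the learned threshold concentrates around the MAP threshold and the disagreement rate falls, mirroring the abstract's claim that more training samples drive the distortion below any fixed tolerance.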

Cited Authors

  • Nokleby, M; Beirami, A; Calderbank, R

Published Date

  • November 10, 2015

Volume / Issue

  • 2015-November /

Electronic International Standard Serial Number (EISSN)

  • 2161-0371

International Standard Serial Number (ISSN)

  • 2161-0363

International Standard Book Number 13 (ISBN-13)

  • 9781467374545

Digital Object Identifier (DOI)

  • 10.1109/MLSP.2015.7324319

Citation Source

  • Scopus