An MDL framework for sparse coding and dictionary learning

The power of sparse signal modeling with learned overcomplete dictionaries has been demonstrated in a variety of applications and fields, from signal processing to statistical inference and machine learning. However, the statistical properties of these models, such as underfitting or overfitting given sets of data, are still not well characterized in the literature. As a result, the success of sparse modeling depends on hand-tuning critical parameters for each data set and application. This work aims to address this issue by providing a practical and objective characterization of sparse models by means of the minimum description length (MDL) principle, a well-established information-theoretic approach to model selection in statistical inference. The resulting framework derives a family of efficient sparse coding and dictionary learning algorithms which, by virtue of the MDL principle, are completely parameter free. Furthermore, the framework makes it possible to incorporate additional prior information into existing models, such as Markovian dependencies, or to define completely new problem formulations, including in the matrix analysis area, in a natural way. These virtues are demonstrated with parameter-free algorithms for the classic image denoising and classification problems, and for low-rank matrix recovery in video applications. The framework is not limited to imaging data, however, and can be applied to a wide range of signal and data types and tasks. © 2012 IEEE.
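To make the core idea concrete, below is a minimal, hypothetical sketch of MDL-style model selection for sparse coding: a greedy (OMP-style) coefficient path is computed, and the sparsity level is chosen by minimizing a simple two-part codelength (bits for a Gaussian-modeled residual plus a fixed number of bits per retained coefficient). The codelength and the helper functions (`omp_path`, `description_length`, `mdl_sparse_code`) are illustrative assumptions for exposition only, not the specific codelengths or algorithms derived in the paper.

```python
# Illustrative sketch only: two-part MDL selection of the sparsity level.
# The codelength below is a toy placeholder, not the paper's codelength.
import numpy as np

def omp_path(D, x, max_atoms):
    """Greedy (OMP-style) coefficient estimates for support sizes 1..max_atoms."""
    n, k = D.shape
    residual, support, path = x.copy(), [], []
    for _ in range(max_atoms):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares refit of the coefficients on the current support.
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        a = np.zeros(k)
        a[support] = coef
        residual = x - D @ a
        path.append(a)
    return path

def description_length(D, x, a, bits_per_coef=16.0):
    """Toy two-part codelength: residual bits under a Gaussian model plus
    a fixed cost for each nonzero coefficient and its index."""
    n = x.size
    rss = float(np.sum((x - D @ a) ** 2))
    sigma2 = max(rss / n, 1e-12)                     # ML residual variance
    residual_bits = 0.5 * n * np.log2(2 * np.pi * np.e * sigma2)
    nnz = int(np.count_nonzero(a))
    model_bits = nnz * (bits_per_coef + np.log2(a.size))
    return residual_bits + model_bits

def mdl_sparse_code(D, x, max_atoms=None):
    """Return the candidate sparse code with the smallest (toy) codelength."""
    max_atoms = max_atoms or min(D.shape)
    candidates = omp_path(D, x, max_atoms)
    costs = [description_length(D, x, a) for a in candidates]
    return candidates[int(np.argmin(costs))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 128))
    D /= np.linalg.norm(D, axis=0)                   # unit-norm dictionary atoms
    true_a = np.zeros(128)
    true_a[rng.choice(128, 5, replace=False)] = 3.0 * rng.standard_normal(5)
    x = D @ true_a + 0.05 * rng.standard_normal(64)
    a_hat = mdl_sparse_code(D, x)
    print("selected nonzeros:", np.count_nonzero(a_hat))
```

The design point this sketch mirrors is that no regularization parameter is hand-tuned: the trade-off between data fit and sparsity is resolved by comparing description lengths across candidate models.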

Cited Authors

  • Ramirez, I; Sapiro, G

Published Date

  • 2012

Published In

  • IEEE Transactions on Signal Processing

Volume / Issue

  • 60 / 6

Start / End Page

  • 2913 - 2927

International Standard Serial Number (ISSN)

  • 1053-587X

Digital Object Identifier (DOI)

  • 10.1109/TSP.2012.2187203

Citation Source

  • SciVal