Universal priors for sparse modeling

Conference Paper

Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical in the success of such models. In this work, we use tools from information theory to propose a sparsity regularization term which has several theoretical and practical advantages over the more standard ℓ0 or ℓ1 ones, and which leads to improved coding performance and accuracy in reconstruction tasks. We also briefly report on further improvements obtained by imposing low mutual coherence and Gram matrix norm on the learned dictionaries. © 2009 IEEE.
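For context on the sparse data model described above, the following is a minimal sketch of the standard ℓ1-regularized sparse coding baseline (solved here with ISTA) that universal priors aim to improve upon. The dictionary `D`, signal `x`, and parameters are toy assumptions for illustration; the paper's information-theoretic regularizer itself is not implemented.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Encode x as a sparse combination of the columns (atoms) of D.

    Solves min_a 0.5*||x - D a||_2^2 + lam*||a||_1 with ISTA.
    This is the standard l1 baseline, not the universal prior
    proposed in the paper.
    """
    # Step size from the Lipschitz constant of the data-fit gradient
    # (squared spectral norm of D).
    L = np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)                # gradient of 0.5*||x - D a||^2
        z = a - grad / L                        # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Toy usage: random unit-norm dictionary and a signal built from 3 atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(256)
a_true[rng.choice(256, 3, replace=False)] = rng.standard_normal(3)
x = D @ a_true
a_hat = ista_sparse_code(D, x, lam=0.05)
print("nonzeros recovered:", np.count_nonzero(np.abs(a_hat) > 1e-3))
```

The paper replaces the fixed ℓ1 penalty above with a regularizer derived from universal coding, and additionally constrains the learned dictionary (low mutual coherence, bounded Gram matrix norm), neither of which is reflected in this sketch.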

Cited Authors

  • Ramírez, I; Lecumberry, F; Sapiro, G

Published Date

  • 2009

Published In

  • 2009 3rd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP 2009)

Start / End Page

  • 197 - 200

Digital Object Identifier (DOI)

  • 10.1109/CAMSAP.2009.5413302