Bayesian and L1 approaches for sparse unsupervised learning


Conference Paper

The use of L1 regularisation for sparse learning has generated immense research interest, with many successful applications in diverse areas such as signal acquisition, image coding, genomics and collaborative filtering. While existing work highlights the many advantages of L1 methods, in this paper we find that L1 regularisation often dramatically under-performs in terms of predictive performance when compared to other methods for inferring sparsity. We focus on unsupervised latent variable models, and develop L1 minimising factor models, Bayesian variants of "L1", and Bayesian models with a stronger L0-like sparsity induced through spike-and-slab distributions. These spike-and-slab Bayesian factor models encourage sparsity while accounting for uncertainty in a principled manner, and avoid unnecessary shrinkage of non-zero values. We demonstrate on a number of data sets that in practice spike-and-slab Bayesian methods outperform L1 minimisation, even on a computational budget. We thus highlight the need to re-assess the wide use of L1 methods in sparsity-reliant applications, particularly when we care about generalising to previously unseen data, and provide an alternative that, over many varying conditions, provides improved generalisation performance. Copyright 2012 by the author(s)/owner(s).
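The abstract's key contrast — L1 regularisation shrinks the non-zero values it keeps, while spike-and-slab (L0-like) sparsity selects coefficients without biasing them — can be illustrated with a minimal sketch. This is not the paper's factor model; the function names, the penalty value, and the hard-threshold selection rule are all illustrative assumptions, chosen only to show the two sparsity behaviours on a toy weight vector.

```python
# Illustrative sketch (not the paper's models): contrast the shrinkage
# behaviour of the L1 penalty with a toy L0-like "select, don't shrink"
# rule in the spirit of spike-and-slab sparsity.

def soft_threshold(w, lam):
    """Proximal operator of the L1 penalty: zeroes small coefficients,
    but also shrinks every surviving coefficient toward zero by lam."""
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

def hard_select(w, threshold):
    """Toy L0-like selection (threshold is an illustrative assumption):
    coefficients at or below the threshold are set to zero; retained
    coefficients keep their full value, with no shrinkage."""
    return w if abs(w) > threshold else 0.0

weights = [3.0, 0.4, -2.0, 0.05]
l1_out = [soft_threshold(w, 0.5) for w in weights]
l0_out = [hard_select(w, 0.5) for w in weights]

# L1 zeroes the small weights but also biases the large ones:
#   l1_out == [2.5, 0.0, -1.5, 0.0]
# the L0-like rule keeps the retained values intact:
#   l0_out == [3.0, 0.0, -2.0, 0.0]
```

The unnecessary shrinkage of the large coefficients under L1 (3.0 becomes 2.5, -2.0 becomes -1.5) is exactly the bias the abstract says spike-and-slab models avoid.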

Cited Authors

  • Mohamed, S; Heller, KA; Ghahramani, Z

Published Date

  • October 10, 2012

Published In

  • Proceedings of the 29th International Conference on Machine Learning, ICML 2012

Volume / Issue

  • 1 /

Start / End Page

  • 751 - 758

Citation Source

  • Scopus