
New algorithms for learning incoherent and overcomplete dictionaries

Publication: Journal Article
Arora, S; Ge, R; Moitra, A
Published in: Journal of Machine Learning Research
January 1, 2014

In sparse recovery we are given a matrix A ∈ ℝ^(n×m) ("the dictionary") and a vector of the form AX where X is sparse, and the goal is to recover X. This is a central notion in signal processing, statistics and machine learning. But in applications such as sparse coding, edge detection, compression and super resolution, the dictionary A is unknown and has to be learned from random examples of the form Y = AX where X is drawn from an appropriate distribution — this is the dictionary learning problem. In most settings, A is overcomplete: it has more columns than rows. This paper presents a polynomial-time algorithm for learning overcomplete dictionaries; the only previously known algorithm with provable guarantees is the recent work of Spielman et al. (2012), who gave an algorithm for the undercomplete case, which is rarely the case in applications. Our algorithm applies to incoherent dictionaries, which have been a central object of study since they were introduced in seminal work of Donoho and Huo (1999). In particular, a dictionary is μ-incoherent if each pair of columns has inner product at most μ/√n. The algorithm makes natural stochastic assumptions about the unknown sparse vector X, which can contain k ≤ c·min(√n/(μ log n), m^(1/2−η)) non-zero entries (for any η > 0). This is close to the largest k allowed by the best sparse recovery algorithms even if one knows the dictionary A exactly. Moreover, both the running time and sample complexity depend on log(1/ε), where ε is the target accuracy, and so our algorithms converge very quickly to the true dictionary. Our algorithm can also tolerate substantial amounts of noise provided it is incoherent with respect to the dictionary (e.g., Gaussian). In the noisy setting, our running time and sample complexity depend polynomially on 1/ε, and this is necessary.
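The generative model and the incoherence condition in the abstract can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's algorithm: the dimensions n, m and sparsity k below are arbitrary assumed values, and a random Gaussian dictionary is used only because such dictionaries are incoherent with high probability.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 128, 5  # signal dim, dictionary size, sparsity (illustrative values)

# A random dictionary with unit-norm columns; overcomplete since m > n.
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)

# mu-incoherence: every pair of distinct columns satisfies |<A_i, A_j>| <= mu/sqrt(n).
# Compute the smallest mu for which this holds for this particular A.
G = A.T @ A
np.fill_diagonal(G, 0.0)
mu = np.abs(G).max() * np.sqrt(n)

# One sample Y = A X from the generative model, with X a k-sparse vector.
X = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
X[support] = rng.standard_normal(k)
Y = A @ X
```

The dictionary learning problem is then: given many such samples Y (but not A or X), recover A up to column permutation and sign.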


Published In

Journal of Machine Learning Research

EISSN

1533-7928

ISSN

1532-4435

Publication Date

January 1, 2014

Volume

35

Start / End Page

779 / 806

Related Subject Headings

  • Artificial Intelligence & Image Processing
  • 4905 Statistics
  • 4611 Machine learning
  • 17 Psychology and Cognitive Sciences
  • 08 Information and Computing Sciences
 

Citation

Arora, S., Ge, R., & Moitra, A. (2014). New algorithms for learning incoherent and overcomplete dictionaries. Journal of Machine Learning Research, 35, 779–806.
