Escaping from saddle points: Online stochastic gradient for tensor decomposition

Published

Conference Paper

We analyze stochastic gradient descent for optimizing non-convex functions. For many non-convex functions, the goal is to find a reasonable local minimum, and the main concern is that gradient updates may become trapped at saddle points. In this paper we identify the strict saddle property for non-convex problems that allows for efficient optimization. Using this property we show that, from an arbitrary starting point, stochastic gradient descent converges to a local minimum in a polynomial number of iterations. To the best of our knowledge, this is the first work that gives global convergence guarantees for stochastic gradient descent on non-convex functions with exponentially many local minima and saddle points. Our analysis can be applied to orthogonal tensor decomposition, which is widely used in learning a rich class of latent variable models. We propose a new optimization formulation for the tensor decomposition problem that has the strict saddle property. As a result we obtain the first online algorithm for orthogonal tensor decomposition with a global convergence guarantee.
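The sketch below is a minimal illustration of the phenomenon the abstract describes, not the paper's algorithm or its exact tensor formulation: noisy gradient updates on the sphere for the toy objective max_{||u||=1} Σ_i ⟨a_i, u⟩⁴ escape the saddle point between two components and converge to one of the components ±a_i. The orthonormal components (taken as the standard basis), step size, noise level, and iteration count are all illustrative assumptions.

```python
# Toy sketch (assumed setup, not the paper's formulation): noisy gradient
# ascent with renormalization to the unit sphere for
#   max_{||u|| = 1}  sum_i <a_i, u>^4,
# whose local maxima are the components +/- a_i and which has a saddle
# point between components, e.g. u = (a_1 + a_2) / sqrt(2).
import numpy as np

rng = np.random.default_rng(0)
d = 10
A = np.eye(d)                      # orthonormal components a_1, ..., a_d (illustrative)

def grad(u):
    # gradient of sum_i <a_i, u>^4 is 4 * sum_i <a_i, u>^3 * a_i
    c = A @ u
    return 4 * A.T @ (c ** 3)

def project(u):
    # renormalize back to the unit sphere
    return u / np.linalg.norm(u)

# Start exactly at a saddle point between the first two components:
# without noise, the iterate stays stuck here.
u = project(A[0] + A[1])

eta, sigma = 0.05, 1e-3            # step size and noise level (illustrative choices)
for t in range(2000):
    g = grad(u) + sigma * rng.standard_normal(d)   # stochastic (noisy) gradient
    u = project(u + eta * g)

# After enough steps the iterate aligns with a single component (up to sign).
scores = np.abs(A @ u)
print("recovered component:", scores.argmax(), "correlation:", scores.max())
```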

Duke Authors

  • Ge, Rong

Cited Authors

  • Ge, R; Huang, F; Jin, C; Yuan, Y

Published Date

  • January 1, 2015

Published In

  • Journal of Machine Learning Research

Volume / Issue

  • 40 / 2015

Electronic International Standard Serial Number (EISSN)

  • 1533-7928

International Standard Serial Number (ISSN)

  • 1532-4435

Citation Source

  • Scopus