On Nonconvex Optimization for Machine Learning

Publication: Journal Article
Jin, C; Netrapalli, P; Ge, R; Kakade, SM; Jordan, MI
Published in: Journal of the ACM
March 1, 2021

Gradient descent (GD) and stochastic gradient descent (SGD) are the workhorses of large-scale machine learning. While classical theory focused on analyzing the performance of these methods in convex optimization problems, the most notable successes in machine learning have involved nonconvex optimization, and a gap has arisen between theory and practice. Indeed, traditional analyses of GD and SGD show that both algorithms converge to stationary points efficiently. But these analyses do not take into account the possibility of converging to saddle points. More recent theory has shown that GD and SGD can avoid saddle points, but the dependence on dimension in these analyses is polynomial. For modern machine learning, where the dimension can be in the millions, such dependence would be catastrophic. We analyze perturbed versions of GD and SGD and show that they are truly efficient: their dimension dependence is only polylogarithmic. Indeed, these algorithms converge to second-order stationary points in essentially the same time as they take to converge to classical first-order stationary points.
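The core idea behind perturbed gradient descent can be sketched as follows: take plain GD steps, but when the gradient becomes small (a candidate saddle point or minimum), inject a small amount of random noise so the iterate can escape strict saddles. This is a hedged illustration, not the paper's exact algorithm — the paper samples the perturbation uniformly from a ball and derives precise thresholds and step sizes, whereas here the Gaussian noise, parameter names, and all constants are illustrative.

```python
import numpy as np

def perturbed_gd(grad, x0, eta=0.1, r=0.01, g_thresh=1e-3,
                 t_noise=10, n_iters=500, seed=0):
    """Illustrative perturbed gradient descent.

    Takes ordinary GD steps x <- x - eta * grad(x); whenever the gradient
    norm drops below g_thresh (and at least t_noise steps have passed
    since the last perturbation), adds small Gaussian noise of scale r.
    Parameter names and noise distribution are illustrative, not the
    paper's notation.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    last_perturb = -t_noise  # allow a perturbation immediately
    for t in range(n_iters):
        g = grad(x)
        if np.linalg.norm(g) <= g_thresh and t - last_perturb >= t_noise:
            # Small gradient: possibly at a saddle, so perturb.
            x = x + r * rng.normal(size=x.shape)
            last_perturb = t
        else:
            x = x - eta * g
    return x

# Example: f(x, y) = x^2 + y^4/4 - y^2/2 has a strict saddle at the
# origin and minima at (0, +1) and (0, -1).  Initialized exactly at the
# saddle, plain GD would stay put forever (the gradient is zero there);
# the perturbation lets the iterate escape toward a minimum.
grad_f = lambda v: np.array([2.0 * v[0], v[1] ** 3 - v[1]])
x_final = perturbed_gd(grad_f, np.zeros(2))
```

The example function is chosen so the second-order behavior is visible: the Hessian at the origin has a negative eigenvalue in the y-direction, which is exactly the kind of strict-saddle geometry the perturbation exploits.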

Published In

Journal of the ACM

DOI

10.1145/3418526

EISSN

1557-735X

ISSN

0004-5411

Publication Date

March 1, 2021

Volume

68

Issue

2

Related Subject Headings

  • Computation Theory & Mathematics
  • 46 Information and computing sciences
  • 08 Information and Computing Sciences
 

Citation

APA: Jin, C., Netrapalli, P., Ge, R., Kakade, S. M., & Jordan, M. I. (2021). On Nonconvex Optimization for Machine Learning. Journal of the ACM, 68(2). https://doi.org/10.1145/3418526
Chicago: Jin, C., P. Netrapalli, R. Ge, S. M. Kakade, and M. I. Jordan. “On Nonconvex Optimization for Machine Learning.” Journal of the ACM 68, no. 2 (March 1, 2021). https://doi.org/10.1145/3418526.
ICMJE: Jin C, Netrapalli P, Ge R, Kakade SM, Jordan MI. On Nonconvex Optimization for Machine Learning. Journal of the ACM. 2021 Mar 1;68(2).
MLA: Jin, C., et al. “On Nonconvex Optimization for Machine Learning.” Journal of the ACM, vol. 68, no. 2, Mar. 2021. Scopus, doi:10.1145/3418526.
NLM: Jin C, Netrapalli P, Ge R, Kakade SM, Jordan MI. On Nonconvex Optimization for Machine Learning. Journal of the ACM. 2021 Mar 1;68(2).
