On the global convergence of randomized coordinate gradient descent for non-convex optimization
Publication, Journal Article
Chen, Z; Li, Y; Lu, J
January 4, 2021
In this work, we analyze the global convergence property of coordinate gradient descent with random choice of coordinates and stepsizes for non-convex optimization problems. Under generic assumptions, we prove that the algorithm iterates almost surely escape strict saddle points of the objective function. As a result, the algorithm is guaranteed to converge to local minima if all saddle points are strict. Our proof is based on viewing the coordinate descent algorithm as a nonlinear random dynamical system and on a quantitative finite-block analysis of its linearization around saddle points.
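For intuition, a minimal Python sketch of the kind of algorithm studied here is given below: at each iteration one coordinate is chosen uniformly at random and updated by a gradient step with a randomly drawn stepsize. The function names, the stepsize range, and the toy saddle objective are illustrative assumptions, not the paper's exact setting or analysis.

```python
import numpy as np

def randomized_coordinate_gradient_descent(grad, x0, n_iters=10_000,
                                           step_range=(0.01, 0.1), rng=None):
    """Minimize a smooth (possibly non-convex) objective by updating one
    randomly chosen coordinate per iteration with a randomly drawn stepsize.

    grad(x) returns the full gradient; only the chosen component is used
    (computing the full gradient is wasteful but keeps the sketch short).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    for _ in range(n_iters):
        i = rng.integers(d)              # random coordinate
        eta = rng.uniform(*step_range)   # random stepsize
        x[i] -= eta * grad(x)[i]         # coordinate gradient step
    return x

# Toy example: f(x, y) = x^2 - y^2 + y^4 / 4 has a strict saddle at the
# origin and local minima at (0, +/- sqrt(2)); iterates started near the
# saddle are expected to escape it and approach a local minimum.
grad_f = lambda v: np.array([2 * v[0], -2 * v[1] + v[1] ** 3])
x_final = randomized_coordinate_gradient_descent(grad_f, x0=[0.5, 1e-3])
```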
Publication Date
January 4, 2021
Citation
APA: Chen, Z., Li, Y., & Lu, J. (2021). On the global convergence of randomized coordinate gradient descent for non-convex optimization.
Chicago: Chen, Ziang, Yingzhou Li, and Jianfeng Lu. “On the global convergence of randomized coordinate gradient descent for non-convex optimization,” January 4, 2021.
ICMJE: Chen Z, Li Y, Lu J. On the global convergence of randomized coordinate gradient descent for non-convex optimization. 2021 Jan 4;
MLA: Chen, Ziang, et al. On the global convergence of randomized coordinate gradient descent for non-convex optimization. Jan. 2021.
NLM: Chen Z, Li Y, Lu J. On the global convergence of randomized coordinate gradient descent for non-convex optimization. 2021 Jan 4;