Stochastic nested variance reduction for nonconvex optimization
We study finite-sum nonconvex optimization problems, where the objective function is an average of n nonconvex functions. We propose a new stochastic gradient descent algorithm based on nested variance reduction. Compared with the conventional stochastic variance reduced gradient (SVRG) algorithm, which uses two reference points to construct a semi-stochastic gradient with diminishing variance in each iteration, our algorithm uses K + 1 nested reference points to build a semi-stochastic gradient whose variance is further reduced in each iteration. For smooth nonconvex functions, the proposed algorithm converges to an ε-approximate first-order stationary point (i.e., ‖∇F(x)‖₂ ≤ ε) within Õ(n ∧ ε^{-2} + ε^{-3} ∧ n^{1/2} ε^{-2}) stochastic gradient evaluations, where a ∧ b denotes min(a, b). This improves the best known gradient complexity of SVRG, O(n + n^{2/3} ε^{-2}), and that of SCSG, O(n ∧ ε^{-2} + ε^{-10/3} ∧ n^{2/3} ε^{-2}). For gradient dominated functions, our algorithm also achieves better gradient complexity than the state-of-the-art algorithms. Thorough experimental results on different nonconvex optimization problems back up our theory.
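
To make the nested estimator concrete, the sketch below applies the K + 1 reference-point idea to a toy finite-sum least-squares problem. It is a minimal illustration under assumptions, not the paper's exact algorithm: the problem, the hyper-parameters (K, the loop lengths T, the batch sizes B, the step size eta), and the refresh schedule are all hypothetical choices made for readability, and the tuned loop lengths and batch sizes behind the stated complexity are omitted. Each level l keeps a reference point x^(l) that is refreshed less often than level l + 1, and the gradient estimate is the level-0 snapshot plus the batch correction terms of all deeper levels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite-sum problem: F(x) = (1/n) * sum_i 0.5 * (a_i . x - b_i)^2.
# Least squares stands in for the nonconvex losses in the paper; the
# nested estimator itself is problem-agnostic.
n, d = 1000, 20
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad(idx, x):
    """Average gradient of the component functions indexed by idx at point x."""
    residual = A[idx] @ x - b[idx]
    return A[idx].T @ residual / len(idx)

# Hypothetical hyper-parameters (not the tuned values from the paper):
K = 2            # number of nested levels beyond the base reference point
T = [8, 8]       # loop lengths T_1, ..., T_K
B = [n, 64, 8]   # batch sizes B_0, ..., B_K for levels 0, ..., K
eta = 0.05       # step size
epochs = 20

x = np.zeros(d)
for _ in range(epochs):
    # Level-0 reference: a full-batch gradient snapshot, as in SVRG's outer loop.
    refs = [x.copy() for _ in range(K + 1)]   # nested reference points x^(0), ..., x^(K)
    g = [grad(np.arange(n), x)] + [np.zeros(d)] * K

    for t in range(int(np.prod(T))):
        # Refresh the level-l reference and its correction term every
        # T_{l+1} * ... * T_K steps; level K is refreshed every step,
        # since x^(K) is the current iterate.
        for l in range(1, K + 1):
            period = int(np.prod(T[l:]))
            if t % period == 0:
                refs[l] = x.copy()
                idx = rng.choice(n, B[l], replace=False)
                g[l] = grad(idx, refs[l]) - grad(idx, refs[l - 1])

        # Nested semi-stochastic gradient: base snapshot plus all level corrections.
        v = sum(g)
        x = x - eta * v

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))
```

With K = 1 and a full batch at level 0, the update reduces to the two-reference-point SVRG estimator, which matches the comparison made in the abstract.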
Related Subject Headings
- 4611 Machine learning
- 1702 Cognitive Sciences
- 1701 Psychology