Un-regularizing: Approximate proximal point and faster stochastic algorithms for empirical risk minimization

Conference Paper

We develop a family of accelerated stochastic algorithms that optimize sums of convex functions. Our algorithms improve upon the fastest running time for empirical risk minimization (ERM), and in particular linear least-squares regression, across a wide range of problem settings. To achieve this, we establish a framework, based on the classical proximal point algorithm, useful for accelerating recent fast stochastic algorithms in a black-box fashion. Empirically, we demonstrate that the resulting algorithms exhibit notions of stability that are advantageous in practice. Both in theory and in practice, the provided algorithms reap the computational benefits of adding a large strongly convex regularization term, without incurring a corresponding bias to the original ERM problem.
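The abstract's key mechanism, in a minimal sketch that is not the authors' implementation: given an ERM objective f(x) = (1/n) * sum_i f_i(x), each outer step approximately minimizes the regularized subproblem f(x) + (lam/2) * ||x - x_t||^2 and then re-centers the proximal term at the new iterate, so any inner solver that speeds up under strong convexity is accelerated while the added regularization leaves no bias in the limit. The Python sketch below is illustrative only; plain inner gradient steps stand in for a fast stochastic solver such as SVRG, and all names and parameter values are assumptions.

    import numpy as np

    def approx_prox_point(grad_f, x0, lam=1.0, outer_iters=50,
                          inner_iters=200, inner_lr=0.05):
        """Approximate proximal point outer loop (illustrative sketch).

        Each outer step approximately solves
            x_{t+1} ~ argmin_x f(x) + (lam/2) * ||x - x_t||^2,
        here with plain gradient steps standing in for a fast
        stochastic solver that exploits the added strong convexity.
        """
        x_center = x0.copy()
        for _ in range(outer_iters):
            x = x_center.copy()
            for _ in range(inner_iters):
                # Gradient of the lam-strongly-convex subproblem.
                g = grad_f(x) + lam * (x - x_center)
                x -= inner_lr * g
            # Re-center the proximal term: because its anchor tracks
            # the iterates, the regularizer adds no asymptotic bias.
            x_center = x
        return x_center

    # Usage on least-squares ERM: f(x) = ||A x - b||^2 / (2 n).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 10))
    b = rng.standard_normal(100)

    def grad(x):
        return A.T @ (A @ x - b) / len(b)

    x_hat = approx_prox_point(grad, np.zeros(10))

Re-centering the proximal anchor at each outer step, rather than fixing it, is what distinguishes this scheme from ordinary l2 regularization: the solver enjoys the conditioning of the regularized subproblems while the sequence of anchors still converges to a minimizer of the original, un-regularized objective.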

Duke Authors

  • Ge, R

Cited Authors

  • Frostig, R; Ge, R; Kakade, SM; Sidford, A

Published Date

  • January 1, 2015

Published In

  • 32nd International Conference on Machine Learning, ICML 2015

Volume

  • 3

Start / End Page

  • 2530 - 2538

International Standard Book Number 13 (ISBN-13)

  • 9781510810587

Citation Source

  • Scopus