
No penalty no tears: Least squares in high-dimensional linear models

Publication, Conference
Wang, X; Dunson, D; Leng, C
Published in: 33rd International Conference on Machine Learning, ICML 2016
January 1, 2016

Ordinary least squares (OLS) is the default method for fitting linear models, but is not applicable for problems with dimensionality larger than the sample size. For these problems, we advocate the use of a generalized version of OLS motivated by ridge regression, and propose two novel three-step algorithms involving least squares fitting and hard thresholding. The algorithms are methodologically simple to understand intuitively, computationally easy to implement efficiently, and theoretically appealing for choosing models consistently. Numerical exercises comparing our methods with penalization-based approaches in simulations and data analyses illustrate the great potential of the proposed algorithms.
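The abstract describes the method only at a high level. As a minimal sketch of that flavor, assuming the ridge-motivated generalized OLS takes the dual form beta_tilde = X^T (X X^T + r I_n)^{-1} y (which stays well defined when p > n and equals (X^T X + r I_p)^{-1} X^T y by a standard matrix identity), one three-step pass of least squares fitting and hard thresholding might look like the Python below. The function name ls_threshold_fit, the sparsity level k, and the ridge parameter r are illustrative assumptions, not the paper's notation.

import numpy as np

def ls_threshold_fit(X, y, k, r=1.0):
    # Sketch only: ridge-motivated screening + hard thresholding + OLS refit.
    n, p = X.shape
    # Step 1: generalized OLS via the n x n dual ridge system,
    # beta_tilde = X^T (X X^T + r I_n)^{-1} y, computable even when p > n.
    beta_tilde = X.T @ np.linalg.solve(X @ X.T + r * np.eye(n), y)
    # Step 2: hard thresholding -- keep the k largest coefficients in magnitude.
    support = np.sort(np.argsort(np.abs(beta_tilde))[-k:])
    # Step 3: refit ordinary least squares on the selected submodel.
    coef, _, _, _ = np.linalg.lstsq(X[:, support], y, rcond=None)
    beta_hat = np.zeros(p)
    beta_hat[support] = coef
    return beta_hat, support

# Toy usage: n = 50 samples, p = 200 predictors, 5 true signals.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
beta_true = np.zeros(200)
beta_true[:5] = 3.0
y = X @ beta_true + 0.5 * rng.standard_normal(50)
beta_hat, support = ls_threshold_fit(X, y, k=5)

In this regime the strong signals dominate beta_tilde, so the thresholding step tends to recover the true support, and the final refit removes the ridge shrinkage bias.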


Published In

33rd International Conference on Machine Learning, ICML 2016

Publication Date

January 1, 2016

Volume

4

Start / End Page

2685 / 2706
 

Citation

APA: Wang, X., Dunson, D., & Leng, C. (2016). No penalty no tears: Least squares in high-dimensional linear models. In 33rd International Conference on Machine Learning, ICML 2016 (Vol. 4, pp. 2685–2706).
Chicago: Wang, X., D. Dunson, and C. Leng. “No penalty no tears: Least squares in high-dimensional linear models.” In 33rd International Conference on Machine Learning, ICML 2016, 4:2685–2706, 2016.
ICMJE: Wang X, Dunson D, Leng C. No penalty no tears: Least squares in high-dimensional linear models. In: 33rd International Conference on Machine Learning, ICML 2016. 2016. p. 2685–706.
MLA: Wang, X., et al. “No penalty no tears: Least squares in high-dimensional linear models.” 33rd International Conference on Machine Learning, ICML 2016, vol. 4, 2016, pp. 2685–706.
NLM: Wang X, Dunson D, Leng C. No penalty no tears: Least squares in high-dimensional linear models. 33rd International Conference on Machine Learning, ICML 2016. 2016. p. 2685–2706.
