Sparse learning for support vector classification
This paper presents a sparse learning algorithm for Support Vector Classification (SVC), called Sparse Support Vector Classification (SSVC), which produces sparse solutions by automatically setting irrelevant parameters exactly to zero. SSVC adopts an L0-norm regularization term and is trained by an iteratively reweighted learning algorithm. We show that the proposed approach admits a hierarchical-Bayes interpretation and establishes close connections with other sparse models. More specifically, one variant of the proposed method is equivalent to the zero-norm classifier of Weston et al. (2003), and the method also provides an extended, more flexible framework parallel to the Sparse Probit Classifier of Figueiredo (2003). Theoretical justification and experimental evaluation on two synthetic datasets and seven benchmark datasets show that SSVC offers performance competitive with SVC while requiring significantly fewer support vectors. © 2010 Elsevier B.V. All rights reserved.
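To illustrate the general idea behind L0-norm regularization trained by iterative reweighting, the following is a minimal sketch, not the authors' exact SSVC algorithm: a linear classifier fit by repeatedly solving a reweighted ridge problem, where each weight's penalty is scaled by the inverse of its current squared magnitude, so that irrelevant weights are driven toward exactly zero. All variable names and the least-squares surrogate loss are illustrative assumptions.

```python
import numpy as np

# Synthetic data: 20 features, only the first 3 are relevant.
rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(n))

# Iteratively reweighted ridge as an L0-norm surrogate (illustrative,
# FOCUSS-style reweighting; not the paper's exact update rule).
lam, eps = 1.0, 1e-6
w = np.ones(d)
for _ in range(50):
    # Penalty on weight j is lam * w_j^2 / (w_prev_j^2 + eps): small
    # weights receive a huge penalty and are pushed to (numerical) zero.
    D = np.diag(lam / (w ** 2 + eps))
    w = np.linalg.solve(X.T @ X + D, X.T @ y)

support = np.abs(w) > 1e-3
print("nonzero weights:", int(support.sum()))
```

On this toy problem the reweighting typically zeroes out most of the 17 irrelevant weights, which is the mechanism by which such schemes obtain sparse solutions without a combinatorial search over the L0 penalty.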
Related Subject Headings
- Artificial Intelligence & Image Processing
- 46 Information and computing sciences
- 1702 Cognitive Sciences
- 0906 Electrical and Electronic Engineering
- 0801 Artificial Intelligence and Image Processing