Discrimination on the Grassmann Manifold: Fundamental Limits of Subspace Classifiers
We derive fundamental limits on the reliable classification of linear and affine subspaces from noisy, linear features. Drawing an analogy between discrimination among subspaces and communication over vector wireless channels, we define two Shannon-inspired characterizations of asymptotic classifier performance. First, we define the classification capacity, which characterizes the necessary and sufficient conditions for vanishing misclassification probability as the signal dimension, the number of features, and the number of subspaces to be discriminated all approach infinity. Second, we define the diversity-discrimination tradeoff, which, by analogy with the diversity-multiplexing tradeoff of fading vector channels, characterizes relationships between the number of discernible subspaces and the misclassification probability as the feature noise power approaches zero. We derive upper and lower bounds on these quantities which are tight in many regimes. Numerical results, including a face recognition application, validate the results in practice.
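The following is a minimal illustrative sketch, not the paper's exact model or method: it simulates nearest-subspace classification of a signal drawn from one of several random low-dimensional subspaces, observed through noisy linear features. The specific dimensions, the Gaussian feature matrix, and the residual-based decision rule are assumptions chosen only to make the problem setup concrete.

```python
# Illustrative sketch (assumed setup, not the paper's exact model):
# classify which of K random k-dimensional subspaces of R^n a signal x
# belongs to, given m noisy linear features y = A x + z.
import numpy as np

rng = np.random.default_rng(0)
n, k, m, K = 20, 3, 10, 8          # ambient dim, subspace dim, #features, #subspaces
sigma = 0.05                        # feature noise standard deviation

# Random k-dimensional subspaces, each represented by an orthonormal basis.
bases = [np.linalg.qr(rng.standard_normal((n, k)))[0] for _ in range(K)]

# Random linear feature map (assumed Gaussian measurements).
A = rng.standard_normal((m, n)) / np.sqrt(m)

def classify(y):
    """Return the index of the subspace whose image under A best explains y."""
    residuals = []
    for U in bases:
        B = A @ U                               # image of the subspace in feature space
        coef, *_ = np.linalg.lstsq(B, y, rcond=None)
        residuals.append(np.linalg.norm(y - B @ coef))
    return int(np.argmin(residuals))

# Monte Carlo estimate of the misclassification probability.
trials, errors = 2000, 0
for _ in range(trials):
    true = rng.integers(K)
    x = bases[true] @ rng.standard_normal(k)    # signal lying in the true subspace
    y = A @ x + sigma * rng.standard_normal(m)  # noisy linear features
    errors += classify(y) != true
print(f"empirical error rate: {errors / trials:.3f}")
```

In this toy setting, sweeping `sigma`, `m`, or `K` gives an empirical feel for the tradeoffs the paper characterizes asymptotically.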
Related Subject Headings
- Networking & Telecommunications
- 4613 Theory of computation
- 4006 Communications engineering
- 1005 Communications Technologies
- 0906 Electrical and Electronic Engineering
- 0801 Artificial Intelligence and Image Processing