PROJECT TITLE:
Discrimination on the Grassmann Manifold: Fundamental Limits of Subspace Classifiers
We derive fundamental limits on the reliable classification of linear and affine subspaces from noisy, linear features. Drawing an analogy between discrimination among subspaces and communication over vector wireless channels, we define two Shannon-inspired characterizations of asymptotic classifier performance. First, we define the classification capacity, which characterizes necessary and sufficient conditions for the misclassification probability to vanish as the signal dimension, the number of features, and the number of subspaces to be discriminated all approach infinity. Second, we define the diversity-discrimination tradeoff, which, by analogy with the diversity-multiplexing tradeoff of fading vector channels, characterizes the relationship between the number of discernible subspaces and the misclassification probability as the feature noise power approaches zero. We derive upper and lower bounds on these quantities that are tight in many regimes. Numerical results, including a face recognition application, validate the results in practice.
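To make the setting concrete, the following is a minimal sketch (not the paper's method) of subspace classification from noisy linear features: signals lie on one of several low-dimensional subspaces, we observe them through a random linear feature map plus Gaussian noise, and a nearest-subspace rule picks the candidate whose image under the feature map best explains the observation. All dimensions, the feature map `A`, and the noise level `sigma` here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's setup): ambient dimension n,
# subspace dimension k, number of linear features m, number of subspaces L.
n, k, m, L = 20, 3, 10, 4

# Random k-dimensional subspaces, each represented by an orthonormal basis.
subspaces = [np.linalg.qr(rng.standard_normal((n, k)))[0] for _ in range(L)]

# Random linear feature map A (m x n).
A = rng.standard_normal((m, n)) / np.sqrt(m)

def classify(y, A, subspaces):
    """Nearest-subspace rule: choose the subspace whose image under A
    leaves the smallest projection residual for the features y."""
    residuals = []
    for U in subspaces:
        B = A @ U                   # image of the subspace under the feature map
        Q, _ = np.linalg.qr(B)      # orthonormal basis of that image
        r = y - Q @ (Q.T @ y)       # residual after projecting y onto it
        residuals.append(np.linalg.norm(r))
    return int(np.argmin(residuals))

# Draw a signal from one subspace, observe noisy features, classify.
true_idx = 2
x = subspaces[true_idx] @ rng.standard_normal(k)
sigma = 0.01                        # small noise: the low-noise regime
y = A @ x + sigma * rng.standard_normal(m)
print(classify(y, A, subspaces))
```

In this low-noise regime the correct subspace's residual is on the order of `sigma`, while generic wrong subspaces leave a residual on the order of the signal itself, which is the gap the diversity-discrimination tradeoff quantifies as `sigma` tends to zero.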