Supervised Sparsity Preserving Projections for Face Recognition
Keywords: Feature extraction, sparse representation, manifold learning, Laplacian discriminant function
Abstract: Recently, feature extraction methods have commonly been used as a principled approach to uncovering the intrinsic structure hidden in high-dimensional data. In this paper, a novel supervised learning method, called Supervised Sparsity Preserving Projections (SSPP), is proposed. SSPP attempts to preserve the sparse representation structure of the data while identifying an efficient discriminant subspace. First, SSPP constructs a concatenated dictionary from class-wise PCA decompositions and learns the sparse representation structure of each sample under this dictionary using the least squares method. Second, by maximizing the ratio of non-local scatter to local scatter, a Laplacian discriminant function is defined to characterize the separability of the samples across the different sub-manifolds. Then, to achieve improved recognition results, SSPP integrates the learned sparse representation structure as a regularization term into the Laplacian discriminant function. Finally, the resulting optimization is converted into a generalized eigenvalue problem. Extensive and promising experimental results on several popular face databases validate the feasibility and effectiveness of the proposed approach.
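The four steps summarized in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `sspp`, all parameter names, the k-NN construction of the local/non-local scatter, and the use of ridge-regularized least squares as a stand-in for the paper's least-squares coding step are assumptions made for illustration.

```python
import numpy as np
from scipy.linalg import eigh

def sspp(X, y, n_atoms=3, k=3, beta=0.1, n_dims=2):
    """Hypothetical sketch of the SSPP pipeline described in the abstract.

    X: (n_samples, n_features) data matrix, y: integer class labels.
    """
    n, d = X.shape
    classes = np.unique(y)

    # Step 1: concatenated dictionary from class-wise PCA bases.
    bases = []
    for c in classes:
        Xc = X[y == c] - X[y == c].mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        bases.append(Vt[:n_atoms])          # leading principal directions
    D = np.vstack(bases).T                  # (d, m) dictionary

    # Step 2: representation codes by regularized least squares
    # (stand-in for the paper's least-squares coding under D).
    A = np.linalg.solve(D.T @ D + 1e-6 * np.eye(D.shape[1]), D.T @ X.T)
    # Sample-to-sample affinity derived from code similarity.
    C = np.abs(A.T @ A)
    Ssim = C / C.max()

    # Step 3: local vs. non-local scatter via a k-NN adjacency.
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    Wl = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]
        Wl[i, nbrs] = Wl[nbrs, i] = 1.0
    Wn = 1.0 - Wl - np.eye(n)               # non-local pairs
    Ll = np.diag(Wl.sum(1)) - Wl            # local Laplacian
    Ln = np.diag(Wn.sum(1)) - Wn            # non-local Laplacian

    # Step 4: fold the code-similarity regularizer into the local term
    # and solve the generalized eigenvalue problem for the projection P.
    Ls = np.diag(Ssim.sum(1)) - Ssim
    Sn = X.T @ Ln @ X
    Sl = X.T @ (Ll + beta * Ls) @ X + 1e-6 * np.eye(d)
    _, vecs = eigh(Sn, Sl)                  # ascending eigenvalues
    return vecs[:, ::-1][:, :n_dims]        # top n_dims eigenvectors

# Demo on synthetic two-class data (hypothetical usage).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (10, 5)),
               rng.normal(3.0, 1.0, (10, 5))])
y = np.array([0] * 10 + [1] * 10)
P = sspp(X, y)                              # (5, 2) projection matrix
```

New samples are then projected as `X @ P` before nearest-neighbor classification, the usual evaluation protocol for subspace methods of this kind.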
How to Cite
Ren, Y., Chen, Y., & Yue, X. (2017). Supervised Sparsity Preserving Projections for Face Recognition. COMPUTING AND INFORMATICS, 36(4), 815–836. Retrieved from https://www.cai.sk/ojs/index.php/cai/article/view/2017_4_815