We consider the problem of sparse subspace learning for data classification and face recognition. Two new approaches, lα-regularization-based sparse locality preserving projection (α-SLPP) and structural sparse locality preserving projection (SSLPP), are proposed by incorporating sparse representation and structural sparse regularization into spectral embedding. The proposed methods efficiently exploit the local geometric structure of the data. Moreover, by inducing sparsity, they make the projection results easier to interpret and help identify more discriminative features for classification and recognition. α-SLPP induces sparsity through a non-convex lα-norm regularizer, which approximates the l0-norm more closely than the convex l1-norm does. SSLPP produces a more structured sparsity pattern through structural sparse regularization, thereby addressing cases in which merely reducing the cardinality of the projection is insufficient. We formulate sparse subspace learning as tractable optimization problems and present efficient algorithms to solve them. Experiments on data classification, face recognition, and pixel-corrupted face recognition verify the feasibility and effectiveness of the proposed approaches.
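For intuition only, a generic lα-regularized locality preserving projection objective can be sketched as follows; this is a standard-form illustration, not necessarily the exact formulation adopted in the paper, and the symbols X, S, D, L, w, and λ are introduced here purely for exposition:

\[
\min_{w}\; w^{\top} X L X^{\top} w \;+\; \lambda \sum_{i} |w_i|^{\alpha}
\quad \text{s.t.}\;\; w^{\top} X D X^{\top} w = 1,\qquad 0 < \alpha < 1,
\]

where X is the data matrix, S a neighborhood affinity matrix, D its diagonal degree matrix, L = D - S the graph Laplacian, w the projection vector, and λ a regularization weight. As α decreases toward 0, the penalty term approaches the l0-norm (the number of nonzero entries of w), which is why a non-convex lα penalty tends to yield sparser projections than the convex l1 penalty. A structured-sparsity variant would typically replace the entry-wise penalty with a group-wise one, e.g. \( \lambda \sum_{g} \|w_g\|_2 \) over predefined groups of variables, encouraging whole groups of coefficients to vanish together rather than merely reducing the overall cardinality.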