In recent years, face recognition has increasingly been treated as a cost-sensitive learning problem, and many cost-sensitive classifiers have been proposed. However, insufficient attention has been paid to cost-sensitive dimensionality reduction, especially cost-sensitive semisupervised dimensionality reduction. To the best of our knowledge, cost-sensitive semisupervised discriminant analysis (CS3DA) may be the first such work. CS3DA first uses sparse representation to infer a soft label for each unlabeled sample and then learns the projection direction by incorporating misclassification costs into both the labeled and unlabeled data. Although CS3DA reduces the misclassification loss, it has two major drawbacks: 1) sparsity is not an intrinsic property of face recognition, so sparse approximations may not deliver the desired robustness or performance, and 2) CS3DA is not proven to satisfy the minimal misclassification loss criterion. In this paper, we embed pairwise costs in semisupervised discriminant analysis (PCSDA) for face recognition. PCSDA first uses a simple \(l_{2}\) approach to predict the labels of the unlabeled data and then learns the projection direction by embedding pairwise costs in both the labeled and unlabeled data. Compared with CS3DA, PCSDA has three major advantages: 1) the \(l_{2}\) approach is more accurate and robust than sparse representation for face recognition; 2) we prove that CS3DA approximates the pairwise Bayesian risk only when the classes are balanced and the face data sets contain no outliers; and 3) PCSDA approximates the pairwise Bayesian risk while accounting for class imbalance and outliers in face recognition. Hence, the projection direction obtained by PCSDA is more discriminative and is robust to outliers and class imbalance. Experimental results on the AR, PIE, ORL, and extended Yale B data sets demonstrate the effectiveness of PCSDA.