Linear subspace learning methods such as Fisher's Linear Discriminant Analysis (LDA), Unsupervised Discriminant Projection (UDP), and Locality Preserving Projections (LPP) have been widely used in face recognition applications as a tool to capture low-dimensional discriminant information. However, when these methods are applied in the context of face recognition, they often encounter the small-sample-size problem. To overcome this problem, a separate Principal Component Analysis (PCA) step is usually adopted to reduce the dimensionality of the data. However, such a step may discard dimensions that contain important discriminative information and thereby hurt classification performance. In this work, we propose Multi-class Fukunaga Koontz Discriminant Analysis (FKDA), which incorporates the Fukunaga Koontz Transform into the optimization of the class-separation criteria in LDA, UDP, and LPP. In contrast to traditional LDA, UDP, and LPP, our approach can work with very high-dimensional data as input, without requiring a separate dimensionality reduction step to make the scatter matrices full rank. In addition, the FKDA formulation seeks optimal projection direction vectors that are orthogonal, which the existing methods cannot guarantee, and it can find exact solutions to the “trace ratio” objective in discriminant analysis, whereas traditional methods solve only a relaxed and inexact “ratio trace” objective. Using six face databases, in the context of large-scale unconstrained face recognition, face recognition with occlusions, and illumination-invariant face recognition, under “closed set”, “semi-open set”, and “open set” recognition scenarios, we show that the proposed FKDA significantly outperforms traditional linear discriminant subspace learning methods as well as five other competing algorithms.
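To make the distinction between the two objectives concrete, a standard way to write them (with $S_b$ the between-class scatter matrix, $S_w$ the within-class scatter matrix, and $W$ the projection matrix; this notation is chosen here for illustration and is not drawn from the body of the paper) is

\[
\text{trace ratio:} \quad \max_{W^\top W = I} \; \frac{\operatorname{tr}\!\left(W^\top S_b W\right)}{\operatorname{tr}\!\left(W^\top S_w W\right)},
\qquad
\text{ratio trace:} \quad \max_{W} \; \operatorname{tr}\!\left[\left(W^\top S_w W\right)^{-1} W^\top S_b W\right].
\]

The trace ratio couples all projection directions through a single scalar ratio and admits an orthonormal solution, whereas the ratio trace reduces to a generalized eigenvalue problem whose eigenvector solution is in general not orthogonal.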