Face recognition is one of the most widely used image analysis applications, and dimensionality reduction is an essential step in it. The curse of dimensionality refers to the fact that, as the dimensionality grows, the sample density decreases exponentially. Dimensionality reduction is the process of reducing the dimensionality of the feature space by obtaining a set of principal features. The purpose of this manuscript is to present a comparative study of Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), two of the most popular appearance-based projection methods for face recognition. PCA constructs a low-dimensional representation of the data that describes as much of the data variance as possible, while LDA finds the vectors in the underlying space that best discriminate between classes. The main idea of PCA is to project the high-dimensional input space onto the subspace that exhibits the maximum variance. Traditional LDA obtains its features by maximizing the between-class scatter while minimizing the within-class scatter.
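
To make the contrast concrete, the sketch below is a minimal NumPy illustration on synthetic data, not the manuscript's experimental pipeline; all array names, class counts, and dimensions are assumptions. It computes the PCA projection from the eigenvectors of the total covariance matrix (maximum-variance directions) and the LDA projection from the generalized eigenproblem on the between-class and within-class scatter matrices.

```python
# Illustrative sketch only: synthetic data, hypothetical sizes and names.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic feature vectors: 3 classes, 30 samples each, 20 dimensions.
n_classes, n_per_class, dim = 3, 30, 20
means = rng.normal(scale=3.0, size=(n_classes, dim))
X = np.vstack([m + rng.normal(size=(n_per_class, dim)) for m in means])
y = np.repeat(np.arange(n_classes), n_per_class)

# --- PCA: directions of maximum total variance -------------------------
mu = X.mean(axis=0)
cov = np.cov(X - mu, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)            # eigenvalues in ascending order
W_pca = eigvec[:, ::-1][:, :2]                  # keep the top-2 variance directions
Z_pca = (X - mu) @ W_pca

# --- LDA: maximize between-class, minimize within-class scatter --------
Sw = np.zeros((dim, dim))                       # within-class scatter
Sb = np.zeros((dim, dim))                       # between-class scatter
for c in range(n_classes):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)
    Sb += len(Xc) * np.outer(mc - mu, mc - mu)

# Solve the generalized eigenproblem Sb w = lambda Sw w via Sw^{-1} Sb.
eigval, eigvec = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(eigval.real)[::-1]
W_lda = eigvec[:, order[:n_classes - 1]].real   # at most C-1 discriminant directions
Z_lda = (X - mu) @ W_lda

print("PCA projection shape:", Z_pca.shape)     # (90, 2)
print("LDA projection shape:", Z_lda.shape)     # (90, 2)
```

Note the structural difference the sketch makes visible: PCA is unsupervised and may retain any number of components, whereas LDA uses the class labels and can produce at most C-1 discriminant directions for C classes.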