Abstract

Dimensionality reduction is a common preprocessing step in many machine learning applications; it transforms the features into a lower-dimensional space. Principal component analysis (PCA) is one of the best-known unsupervised dimensionality reduction techniques. Its goal is to find the PCA space, which represents the directions of maximum variance of the given data. This paper highlights the basic background needed to understand and implement the PCA technique. It starts with basic definitions of the PCA technique and the algorithms of two methods of calculating PCA, namely, the covariance matrix and singular value decomposition (SVD) methods. Moreover, a number of numerical examples are illustrated to show how the PCA space is calculated in easy steps. Three experiments are conducted to show how to apply PCA in real applications, including biometrics, image compression, and visualisation of high-dimensional datasets.
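The two calculation methods named in the abstract can be sketched briefly. The following is a minimal illustration (not the paper's own code) showing that, on mean-centered data, the eigendecomposition of the covariance matrix and the SVD of the data matrix recover the same principal directions and variances; the toy data and variable names are assumptions for the example.

```python
import numpy as np

# Hypothetical toy data: 10 samples, 3 features
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))

# Center the data: PCA directions are defined on mean-centered features
Xc = X - X.mean(axis=0)
n = Xc.shape[0]

# Method 1: eigendecomposition of the covariance matrix
cov = Xc.T @ Xc / (n - 1)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigh returns ascending order
order = np.argsort(eigvals)[::-1]            # sort by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Method 2: singular value decomposition of the centered data matrix
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
svd_vals = S**2 / (n - 1)                    # variances along each direction

# Both methods give the same variances and directions (up to sign)
assert np.allclose(eigvals, svd_vals)
assert np.allclose(np.abs(eigvecs), np.abs(Vt.T))

# Project onto the first k principal components to reduce dimensionality
k = 2
X_reduced = Xc @ eigvecs[:, :k]
print(X_reduced.shape)  # (10, 2)
```

In practice the SVD route is usually preferred numerically, since it avoids explicitly forming the covariance matrix.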
