Abstract

In classical statistical pattern recognition tasks, data samples are usually represented as n-dimensional vectors, i.e. the data is vectorized to form data vectors before any technique is applied. However, in many real applications the dimension of these 1D data vectors is very high, leading to the "curse of dimensionality". The curse of dimensionality is a significant obstacle in pattern recognition and machine learning problems that involve learning from few data samples in a high-dimensional feature space. In face recognition, principal component analysis (PCA) and linear discriminant analysis (LDA) are the most popular subspace analysis approaches for learning the low-dimensional structure of high-dimensional data. However, PCA and LDA operate on 1D vectors transformed from image matrices, which discards structural information and makes evaluation of the covariance matrices computationally expensive. In this chapter, straightforward image projection techniques are introduced for image feature extraction. As opposed to conventional PCA and LDA, matrix-based subspace analysis operates on 2D matrices rather than 1D vectors. That is, the image matrix does not need to be transformed into a vector beforehand; instead, an image covariance matrix can be constructed directly from the original image matrices. We use the terms "matrix-based" and "image-based" subspace analysis interchangeably in this chapter. In contrast to the covariance matrix of PCA and LDA, the image covariance matrix used by image-based approaches is much smaller. As a result, these approaches have two important advantages over traditional PCA and LDA. First, it is easier to evaluate the covariance matrix accurately. Second, less time is required to determine the corresponding eigenvectors (Jian Yang et al., 2004). A brief history of image-based subspace analysis can be summarized as follows.
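To make the contrast concrete, the following sketch (a minimal NumPy illustration, not the chapter's reference implementation) constructs the image covariance matrix of 2DPCA (Jian Yang et al., 2004) directly from m × n image matrices: for M images A_i with mean Ā, G = (1/M) Σ (A_i − Ā)ᵀ(A_i − Ā) is only n × n, whereas vectorizing would yield an mn × mn covariance matrix. The function and variable names are illustrative assumptions.

```python
import numpy as np

def image_covariance(images):
    """Image covariance matrix of 2DPCA.

    images: array of shape (M, m, n) -- M image matrices kept in 2D form.
    Returns the n x n matrix G = (1/M) * sum_i (A_i - mean)^T (A_i - mean).
    """
    A = np.asarray(images, dtype=float)
    mean = A.mean(axis=0)          # m x n mean image
    centered = A - mean
    # Accumulate (A_i - mean)^T (A_i - mean) over all images in one pass
    return np.einsum('ikj,ikl->jl', centered, centered) / len(A)

def project(images, G, d):
    """Project each image onto the top-d eigenvectors of G (feature matrix m x d)."""
    _, V = np.linalg.eigh(G)       # eigenvalues in ascending order
    X = V[:, ::-1][:, :d]          # top-d eigenvectors, n x d
    return np.asarray(images, dtype=float) @ X
```

For 100 × 100 images, G here is 100 × 100, while the covariance matrix of vector-based PCA would be 10000 × 10000, which illustrates the two advantages noted above.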
Based on PCA, several image-based subspace analysis approaches have been developed, such as 2DPCA (Jian Yang et al., 2004), GLRAM (Jieping Ye, 2004), non-iterative GLRAM (Jun Liu & Songcan Chen, 2006; Zhizheng Liang et al., 2007), MatPCA (Songcan Chen et al., 2005), 2DSVD (Chris Ding & Jieping Ye, 2005), and concurrent subspace analysis (D. Xu et al., 2005). Based on LDA, 2DLDA (Ming Li & Baozong Yuan, 2004), MatFLDA (Songcan Chen et al., 2005), iterative 2DLDA (Jieping Ye et al., 2004), and non-iterative 2DLDA (Inoue, K. & Urahama, K., 2006) have been developed to date. The main purpose of this chapter is to give a general overview of these matrix-based approaches, together with the detailed mathematical theory behind them. All algorithms presented here are up to date as of January 2007.
