Abstract

Various methods for feature extraction and dimensionality reduction have been proposed in recent decades, including supervised and unsupervised methods, both linear and nonlinear. Despite the different motivations of these methods, in this paper we present a general formulation, factor analysis, that unifies them within a common framework. In factor analysis, an object is viewed as composed of a content factor and a style factor, and the objective of feature extraction and dimensionality reduction is to recover the content factor free of the style factor. The factor analysis framework involves two vital steps: one is the design of the factor-separating objective function, including the design of the partition and the weight matrix; the other is the design of the space mapping function. In this paper, the classical Linear Discriminant Analysis (LDA) and Locality Preserving Projection (LPP) algorithms are improved under the factor analysis framework, yielding LDA based on factor analysis (FA-LDA) and LPP based on factor analysis (FA-LPP). Experimental results show that the proposed approaches outperform the classical LDA and LPP algorithms in classification performance.
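For intuition, the two design steps described above can be read, in their linearized form, as a pair of weight matrices driving a generalized eigenvalue problem. The sketch below is a minimal illustration under that reading; the function name, the regularization constant, and the particular weight-matrix choices are our own assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import eigh

def linear_embedding(X, W, B, dim):
    """Minimal sketch of a weight-matrix-driven linear projection.

    X   : (d, n) data matrix, one sample per column.
    W   : (n, n) 'attracting' weights (pairs that should stay close).
    B   : (n, n) 'separating' weights (pairs that should be pushed apart).
    dim : target dimensionality.

    Solves the generalized eigenproblem  X L X^T a = lambda X L_B X^T a,
    where L and L_B are the graph Laplacians of W and B, and keeps the
    eigenvectors with the smallest eigenvalues as projection directions.
    """
    L = np.diag(W.sum(axis=1)) - W        # Laplacian of the attracting graph
    Lb = np.diag(B.sum(axis=1)) - B       # Laplacian of the separating graph
    S = X @ L @ X.T
    Sb = X @ Lb @ X.T + 1e-8 * np.eye(X.shape[0])  # regularize for stability
    evals, evecs = eigh(S, Sb)            # symmetric generalized eigenproblem
    A = evecs[:, :dim]                    # smallest eigenvalues first
    return A                              # embed new data with A.T @ X
```

In this reading, filling W with uniform within-class weights and B with the corresponding between-class weights specializes the objective toward classical LDA, while heat-kernel neighborhood weights for W move it toward LPP, which is why a single partition-and-weight design step can cover both algorithms.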

Highlights

  • Feature extraction and selection are both critical and challenging components of pattern recognition [1, 2]

  • We present a general framework called factor analysis, along with its linearization, kernelization, and tensorization, which offers a unified view for understanding and explaining many popular dimensionality reduction algorithms such as the ones mentioned above

  • We aim to provide insight into the relationships among state-of-the-art dimensionality reduction algorithms and to facilitate the design of new ones

Introduction

Feature extraction and selection are both critical and challenging components of pattern recognition [1, 2]. Algorithms for dimensionality reduction in supervised and unsupervised learning tasks have attracted much attention in recent years. The relative simplicity and effectiveness of Principal Component Analysis (PCA) [3] and Linear Discriminant Analysis (LDA) [3, 4] have made these two linear algorithms very attractive. Three more recently developed algorithms, ISOMAP [7], LLE [8], and Laplacian Eigenmap [9], can be applied for nonlinear dimensionality reduction on datasets that lie on or near a lower-dimensional manifold. To extend linear dimensionality reduction algorithms to nonlinear ones, the kernel trick [10] has been applied: linear operations are performed on higher- or infinite-dimensional features obtained through a kernel mapping function. A number of algorithms [11, 12, 13, 14] have recently been proposed to carry out dimensionality reduction on objects encoded as matrices or tensors of arbitrary order.
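As a concrete illustration of the kernel trick just mentioned, the following is a minimal, self-contained sketch of kernel PCA with an RBF kernel; the function name and parameter choices are illustrative assumptions, not an algorithm defined in this paper.

```python
import numpy as np

def rbf_kernel_pca(X, gamma, dim):
    """Minimal kernel-trick sketch: PCA in an RBF-induced feature space.

    X     : (n, d) data, one sample per row.
    gamma : RBF bandwidth, k(x, y) = exp(-gamma * ||x - y||^2).
    dim   : number of nonlinear components to keep.
    """
    # Pairwise squared distances and the kernel (Gram) matrix.
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Center the kernel matrix, i.e. center the implicit features.
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Linear PCA on the implicit high-dimensional features.
    evals, evecs = np.linalg.eigh(Kc)              # ascending order
    evals = evals[::-1][:dim]                      # leading eigenvalues
    evecs = evecs[:, ::-1][:, :dim]                # matching eigenvectors
    # Projections of the training samples onto the leading components.
    return evecs * np.sqrt(np.maximum(evals, 0.0))
```

Calling, say, rbf_kernel_pca(X, gamma=1.0, dim=2) returns two nonlinear coordinates per sample: a purely linear operation (PCA) carried out in the kernel-induced feature space becomes nonlinear in the original space, which is exactly the extension the paragraph describes.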
