Abstract

For image recognition, an extensive number of subspace-learning methods have been proposed to overcome the high-dimensionality problem of the features being used. In this paper, we first give an overview of the most popular and state-of-the-art subspace-learning methods, and then present a novel manifold-learning method, named soft locality preserving map (SLPM). SLPM aims to control the degree of spread of the different classes, which is closely connected to the generalizability of the learned subspace. We also review the extension of manifold-learning methods to deep learning by formulating their loss functions for training, and further reformulate SLPM into a soft locality preserving (SLP) loss. These loss functions serve as additional regularizers in the training of deep neural networks. We evaluate these subspace-learning methods, as well as their deep-learning extensions, on facial expression recognition. Experiments on four widely used databases show that SLPM effectively reduces the dimensionality of the feature vectors and enhances the discriminative power of the extracted features. Moreover, the experimental results demonstrate that the learned deep features regularized by the SLP loss achieve better discriminability and generalizability for facial expression recognition.
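To make the regularization idea concrete, below is a minimal, hypothetical sketch (in PyTorch) of adding a locality-preserving penalty to a standard classification loss. The function slp_style_penalty and its hyper-parameters margin and beta are our illustrative names, not the paper's formulation of the SLP loss; the sketch only shows the general pattern of pulling same-class deep features together while pushing different-class features apart.

```python
# Hypothetical SLP-style regularizer: NOT the paper's exact loss, only a
# sketch of the "classification loss + locality penalty" training pattern.
import torch
import torch.nn.functional as F

def slp_style_penalty(features: torch.Tensor, labels: torch.Tensor,
                      margin: float = 1.0, beta: float = 0.1) -> torch.Tensor:
    """features: (N, D) deep features; labels: (N,) integer class labels."""
    dist = torch.cdist(features, features)              # (N, N) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # same-class pair mask
    pull = dist[same].pow(2).mean()                     # intra-class compactness
    push = F.relu(margin - dist[~same]).pow(2).mean()   # inter-class separation
    return pull + beta * push

# Usage inside a training step (lam balances the two terms):
# loss = F.cross_entropy(logits, labels) + lam * slp_style_penalty(feats, labels)
```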

Highlights

  • Dimensionality reduction, which aims to find the distinctive features to represent high-dimensional data in a low-dimensional subspace, is a fundamental problem in classification

  • We describe the extension of locality preserving projection (LPP) to deep learning, and also formulate the proposed soft locality preserving map (SLPM) algorithm for deep learning, so that more discriminative deep features of facial expressions can be learned

  • Locality-sensitive discriminant analysis (LSDA), locality-preserved maximum information projection (LPMIP), and our proposed SLPM define their objective functions as the difference between the intrinsic-graph and penalty-graph matrices, whereas maximum margin criterion (MMC) and soft discriminant map (SDM) use the difference between the inter-class and intra-class scatter matrices (see the generic forms sketched after this list)
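
In generic graph-embedding notation (ours, not necessarily the paper's), the two criterion families contrasted in the last highlight can be written as follows, where L_i and L_p denote the Laplacians of the intrinsic and penalty graphs, and S_w and S_b the within-class and between-class scatter matrices:

```latex
% Graph-difference family (LSDA, LPMIP, SLPM):
\max_{W}\ \operatorname{tr}\!\left( W^{\top} X \left( L_p - L_i \right) X^{\top} W \right)
% Scatter-difference family (MMC, SDM):
\max_{W}\ \operatorname{tr}\!\left( W^{\top} \left( S_b - S_w \right) W \right)
```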


Summary

INTRODUCTION

Dimensionality reduction, which aims to find the distinctive features to represent high-dimensional data in a low-dimensional subspace, is a fundamental problem in classification. Linear methods, such as PCA, LDA, and SDM, may fail to discover the underlying nonlinear structure of the data under consideration, and they may lose some discriminant information about the manifolds during the linear projection. To overcome this problem, a number of nonlinear dimensionality-reduction techniques have been proposed. Popular nonlinear manifold-learning methods include ISOMAP [8], locally linear embedding (LLE) [9], and Laplacian eigenmaps [10], all of which can be considered special cases of the general framework for dimensionality reduction named “graph embedding” [11]. Although these methods can represent the local structure of the data, they suffer from the out-of-sample problem.
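
As a concrete illustration of how a linear graph-embedding method sidesteps the out-of-sample problem, here is a minimal NumPy/SciPy sketch of locality preserving projection (LPP): because an explicit projection matrix is learned, any unseen sample is embedded by a plain matrix product. The function name lpp_fit, the affinity matrix W_adj, and the small diagonal stabilizer are our assumptions for illustration, not code from the paper.

```python
# Minimal LPP sketch (our illustration): a linear graph-embedding method
# learns an explicit projection, so new samples need no re-training.
import numpy as np
from scipy.linalg import eigh

def lpp_fit(X: np.ndarray, W_adj: np.ndarray, dim: int = 2) -> np.ndarray:
    """X: (n_samples, n_features) data; W_adj: (n, n) symmetric affinity
    matrix (e.g., heat-kernel weights over k-nearest neighbours).
    Returns a (n_features, dim) projection matrix."""
    D = np.diag(W_adj.sum(axis=1))               # degree matrix
    L = D - W_adj                                # graph Laplacian
    A = X.T @ L @ X                              # minimize tr(P' X' L X P)
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # constraint matrix, stabilized
    _, vecs = eigh(A, B)                         # generalized eigenproblem
    return vecs[:, :dim]                         # smallest eigenvalues first

# Out-of-sample embedding of a new sample x_new is just a matrix product:
# P = lpp_fit(X, W_adj); y_new = P.T @ x_new
```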

AN OVERVIEW OF SUBSPACE-LEARNING METHODS
SOFT LOCALITY PRESERVING MAP
FEATURE DESCRIPTORS AND GENERATION
EXPERIMENTAL SET-UP AND RESULTS
CONCLUSION