Abstract

Principal component analysis is one of the most commonly used methods for dimensionality reduction in signal processing. However, the most commonly used PCA formulation is based on the $L_2$ -norm, which can be highly influenced by outlier data. In recent years, there has been growing interest in the development of more robust PCA methods. Recent works explore alternative norms, such as the $L_1$ -norm or the more general $L_p$ -norms, which significantly improve robustness over the $L_2$ -norm. In this work, we present the Grassmann Iterative P-norm PCA (GrIP-PCA) method, which uses an iterative Grassmann manifold optimization approach to find the solution to the highly non-convex $L_p$ -norm PCA problem. The increased flexibility of this iterative optimization approach allows for the first ever direct comparison between the projection maximization and reprojection minimization objective functions for general $L_p$ -PCA. Our results demonstrate that the underutilized reprojection formulation leads to improved robustness of PCA in multiple experiments.
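The two objective functions mentioned in the abstract can be stated concretely. The following is a sketch using standard notation assumed here (data matrix $X \in \mathbb{R}^{d \times n}$, projection matrix $W \in \mathbb{R}^{d \times k}$ with orthonormal columns), not necessarily the paper's exact symbols:

```latex
% Projection-maximization form of L_p-PCA:
\max_{W^\top W = I_k} \; \lVert W^\top X \rVert_p^p
  \;=\; \sum_{i,j} \bigl| (W^\top X)_{ij} \bigr|^p

% Reprojection-minimization form of L_p-PCA:
\min_{W^\top W = I_k} \; \lVert X - W W^\top X \rVert_p^p
```

For $p = 2$ the two forms are equivalent and recover classical PCA; for general $p$ they differ, which is what makes a direct comparison between them meaningful.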

Highlights

  • Principal Component Analysis (PCA) is a popular tool for dimensionality reduction and data representation in the fields of signal processing and pattern recognition

  • The traditional L2-PCA is highly influenced by outliers in the training set [1], while L1-PCA and fractional p-norms are more robust to outliers

  • We propose a method for Lp-PCA linear dimensionality reduction which combines Grassmann Manifold (GM) optimization [4] with a deep learning framework



Summary

INTRODUCTION

Principal Component Analysis (PCA) is a popular tool for dimensionality reduction and data representation in the fields of signal processing and pattern recognition. The GM contains all possible orthogonal projections of the data, and the optimal PCA solution is guaranteed to lie on the GM. With this formulation, finding the solution for PCA becomes an optimization problem on the GM. Grassmann Iterative P-norm PCA (GrIP-PCA) addresses the challenging problem of L1- and fractional p-norms for Lp-PCA. The main contributions of this work are as follows: application of a two-step iterative GM optimization method to the solution of the general Lp-norm PCA problem. Our experiments successfully demonstrate improved robustness of PCA methods through the use of fractional p-norms.
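To make the idea of iterative optimization on the Grassmann manifold concrete, here is a minimal numpy sketch of gradient descent on the reprojection-minimization Lp-PCA objective. This is an illustration of the general technique (Euclidean gradient, projection to the tangent space, QR retraction back onto the manifold), not the authors' exact algorithm; the function names, smoothing constant `eps`, and step size are assumptions for this example:

```python
import numpy as np

def lp_reprojection_objective(W, X, p=1.5, eps=1e-8):
    """Smoothed L_p reprojection error ||X - W W^T X||_p^p.

    eps keeps the objective differentiable at zero residuals,
    which matters for fractional p < 2.
    """
    E = X - W @ (W.T @ X)
    return np.sum((E**2 + eps) ** (p / 2))

def grip_pca_sketch(X, k, p=1.5, lr=1e-4, iters=500, eps=1e-8, seed=0):
    """Illustrative Grassmann-manifold gradient descent for L_p-PCA
    (reprojection-minimization form).

    X : (d, n) data matrix, k : number of components.
    Returns W, a (d, k) matrix with orthonormal columns.
    """
    d, _ = X.shape
    rng = np.random.default_rng(seed)
    # Random orthonormal starting point on the Grassmann manifold Gr(d, k).
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))
    for _ in range(iters):
        E = X - W @ (W.T @ X)
        # Gradient of the smoothed objective with respect to the residual E.
        G = p * E * (E**2 + eps) ** ((p - 2) / 2)
        # Euclidean gradient with respect to W (chain rule through E).
        egrad = -(G @ X.T @ W + X @ G.T @ W)
        # Project onto the tangent space of the manifold at W.
        rgrad = egrad - W @ (W.T @ egrad)
        # Retraction: a QR factorization maps the step back onto the manifold.
        W, _ = np.linalg.qr(W - lr * rgrad)
    return W
```

The tangent-space projection and QR retraction are what keep every iterate a valid orthonormal projection, so the constraint is enforced by construction rather than by penalty terms.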

BACKGROUND
METHODOLOGY
RESULTS
SYNTHETIC DATASET EXPERIMENTS
LABELED FACES IN THE WILD EXPERIMENTS
CONCLUSION
