Abstract

Real-world high-dimensional data are often corrupted by noise and gross outliers, and principal component analysis (PCA) fails to recover the true low-dimensional subspace in such cases. For this reason, robust variants of PCA, which penalize arbitrarily large outlying entries, are preferred for dimension reduction. In this paper, we argue that outliers must be studied not only in the observed data matrix but also in the orthogonal complement of the authentic principal subspace, since the latter can seriously skew the estimation of the principal components. We design a reinforced robustification of principal component pursuit that detects both types of outliers and eliminates their influence on the final subspace estimate. Simulations under a range of design conditions clearly demonstrate the superiority of the proposed method over other popular implementations of robust PCA. We also showcase applications of the method in challenging face recognition and video background subtraction scenarios. Beyond recovering a usable low-dimensional subspace from real-world data sets, the technique captures semantically meaningful outliers.
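The principal component pursuit (PCP) baseline that the paper robustifies decomposes an observed matrix M into a low-rank part L and a sparse outlier part S by minimizing ||L||_* + λ||S||_1 subject to L + S = M. Below is a minimal NumPy sketch of the standard augmented-Lagrangian scheme for this problem, alternating singular-value thresholding on L with entrywise soft-thresholding on S. This illustrates only the classical PCP formulation, not the paper's reinforced variant; the function names and the default choices of `lam` and `mu` are illustrative conventions, not taken from the paper.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise shrinkage: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    """Singular-value shrinkage: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

def pcp(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Principal component pursuit via an augmented-Lagrangian iteration.

    Splits M into a low-rank L and a sparse S. Defaults follow the
    common choices lam = 1/sqrt(max(m, n)) and mu = mn / (4 * ||M||_1);
    both are tunable assumptions, not prescribed by the paper.
    """
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    if mu is None:
        mu = (m * n) / (4.0 * np.abs(M).sum())
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # scaled dual variable for the constraint L + S = M
    norm_M = np.linalg.norm(M, 'fro')
    for _ in range(max_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)
        S = soft_threshold(M - L + Y / mu, lam / mu)
        R = M - L - S            # constraint residual
        Y += mu * R              # dual ascent step
        if np.linalg.norm(R, 'fro') / norm_M < tol:
            break
    return L, S
```

On a synthetic rank-2 matrix with a few percent of its entries replaced by gross outliers, this iteration typically recovers the low-rank component to high relative accuracy, which is the setting the paper's simulations extend.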
