Abstract

We derive and discuss two new algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja (1985) and Sanger (1989). It is well known that the traditional PCA algorithms, which are derived by using the gradient ascent technique on an objective function, are slow to converge. Furthermore, the convergence of these algorithms depends on the appropriate selection of the gain sequences. Since online applications demand faster convergence and an adaptive choice of the gains, we present new algorithms to solve these problems. We first present a new unconstrained objective function which can be maximized to obtain the PCA components. Adaptive algorithms are derived from this objective function by the use of (1) gradient ascent, (2) conjugate direction, and (3) Newton-Raphson methods of optimization. Although the gradient ascent technique results in the well-known Xu (1993) algorithm, the conjugate direction and Newton-Raphson methods produce two new algorithms for PCA. Extensive experiments on synthetic Gaussian and real-world signal data show the faster convergence of the new algorithms over the traditional methods.
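For context, the sketch below shows the classical gradient-ascent baseline the abstract refers to: Oja's (1985) single-unit Hebbian rule, whose convergence depends on the gain (learning rate) sequence. The function name, gain value, and synthetic data are illustrative assumptions, not settings taken from the paper.

```python
import numpy as np

def oja_pca(X, eta=0.005, n_epochs=20, seed=0):
    """Minimal sketch of Oja's (1985) rule for the first principal component.

    Illustrative only: the constant gain `eta` stands in for the gain
    sequence whose selection the abstract discusses.
    """
    rng = np.random.default_rng(seed)
    _, dim = X.shape
    # Random unit-norm initial weight vector.
    w = rng.standard_normal(dim)
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for x in X:
            y = w @ x                    # neuron output y = w^T x
            w += eta * y * (x - y * w)   # Hebbian term with self-normalizing decay
    return w

# Usage: recover the dominant eigenvector of a synthetic Gaussian sample.
rng = np.random.default_rng(1)
C = np.array([[3.0, 1.0], [1.0, 1.0]])   # true covariance
X = rng.multivariate_normal([0.0, 0.0], C, size=2000)
w = oja_pca(X)
true_v = np.linalg.eigh(C)[1][:, -1]      # dominant eigenvector of C
print(abs(w @ true_v))                    # close to 1.0 (up to sign)
```

The paper's contribution, per the abstract, is to replace this slow first-order update with conjugate-direction and Newton-Raphson iterations on a new unconstrained objective.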
