Abstract

Dimensionality reduction plays a pivotal role in preparing high-dimensional data for classification and discrimination tasks: it eliminates redundant features and improves the efficiency of classifiers. The effectiveness of a dimensionality reduction algorithm hinges on its numerical stability; numerically stable data projections yield better class separability in the lower-dimensional embedding and, consequently, higher classification accuracy. This paper investigates the numerical properties of dimensionality reduction and discriminant subspace learning, with a specific focus on Locality-Preserving Partial Least Squares Discriminant Analysis (LPPLS-DA). High-dimensional data frequently render the scatter matrices singular, which poses a significant challenge. To tackle this issue, the paper explores two robust implementations of LPPLS-DA. These approaches not only optimize the data projections but also capture more discriminative features, yielding a marked improvement in classification accuracy. These findings are supported by numerical experiments on synthetic and spectral datasets, in which the proposed methods outperform several state-of-the-art dimensionality reduction techniques in both classification accuracy and degree of dimensionality reduction.
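
The singularity issue the abstract refers to is easy to demonstrate: when the number of features exceeds the number of samples, the scatter matrix is rank-deficient and cannot be inverted. The sketch below illustrates this in NumPy and applies Tikhonov-style regularization as one common remedy; this is an illustrative assumption only, not the paper's actual pair of robust LPPLS-DA implementations, which the abstract does not detail (the variable names and the value of `eps` are likewise hypothetical).

```python
import numpy as np

# With more features (d) than samples (n), the centered scatter matrix
# S = Xc^T Xc has rank at most n - 1 and is therefore singular.
rng = np.random.default_rng(0)
n, d = 20, 100                       # few samples, many features
X = rng.standard_normal((n, d))

Xc = X - X.mean(axis=0)              # center the data
S = Xc.T @ Xc                        # d x d scatter matrix

print(np.linalg.matrix_rank(S))      # at most n - 1 = 19, far below d = 100

# One common remedy (an assumption here, not the paper's method) is
# Tikhonov-style regularization, S + eps * I, which restores invertibility.
eps = 1e-3
S_reg = S + eps * np.eye(d)
print(np.linalg.matrix_rank(S_reg))  # full rank: d = 100
```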
