Linear discriminant analysis (LDA) is a widely used supervised feature extraction technique that maps data onto a low-dimensional subspace in which the between-class scatter is maximized while the within-class scatter is minimized. Despite its utility, LDA faces challenges, most notably when the within-class scatter matrix becomes singular in the small-sample-size regime. Other dimensionality reduction techniques, such as Neighborhood Component Analysis (NCA), have been proposed as alternatives to LDA. NCA learns a linear transformation that maximizes the likelihood that data points of the same class are clustered together in the lower-dimensional space. However, NCA is optimized over a non-convex cost function, making it prone to local minima. To address the challenges faced by both LDA and NCA, we propose a novel dimensionality reduction method named neighborhood discriminant analysis (NDA). Like NCA, NDA learns a linear transformation that aims to cluster data points by class label. However, NDA is framed as an eigendecomposition problem, eliminating the need for non-convex optimization. We demonstrate the new approach on real small target sonar data. [Work funded by the ONR grant numbers N000142112420 and N000142312503, and DoD Navy (NEEC) Grant No. N001742010016.]
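The small-sample-size failure mode mentioned in the abstract can be illustrated with standard LDA (not the authors' proposed NDA, whose details are not given here). The sketch below builds the within-class scatter matrix `S_w` and between-class scatter matrix `S_b` for toy two-class data with more features than samples, shows that `S_w` is rank-deficient, and applies a common regularization workaround before the eigendecomposition; all variable names and the shrinkage constant `eps` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: dimensionality d exceeds the total sample count, so the
# within-class scatter matrix S_w is singular -- the small-sample-size
# failure mode of classical LDA described in the abstract.
d, n_per_class = 10, 4
X1 = rng.normal(0.0, 1.0, size=(n_per_class, d))  # class 1
X2 = rng.normal(2.0, 1.0, size=(n_per_class, d))  # class 2
X = np.vstack([X1, X2])
mu, mu1, mu2 = X.mean(0), X1.mean(0), X2.mean(0)

# Within-class scatter (sum of per-class covariance terms) and
# between-class scatter (spread of class means around the grand mean).
S_w = (X1 - mu1).T @ (X1 - mu1) + (X2 - mu2).T @ (X2 - mu2)
S_b = (n_per_class * np.outer(mu1 - mu, mu1 - mu)
       + n_per_class * np.outer(mu2 - mu, mu2 - mu))

# rank(S_w) <= n - c < d, so S_w cannot be inverted directly.
print(np.linalg.matrix_rank(S_w))  # strictly less than d

# A common workaround (illustrative, not the NDA method): shrinkage
# regularization of S_w before the generalized eigendecomposition.
eps = 1e-3
evals, evecs = np.linalg.eig(np.linalg.inv(S_w + eps * np.eye(d)) @ S_b)
w = np.real(evecs[:, np.argmax(np.real(evals))])  # leading discriminant direction
```

Projecting the data onto `w` then yields the one-dimensional subspace in which the ratio of between-class to (regularized) within-class scatter is maximized; for two classes this is the single discriminant direction LDA can produce.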