Abstract
Linear Discriminant Analysis (LDA) is one of the most successful supervised dimensionality reduction methods and has been widely used in many real-world applications. However, the l2-norm is employed as the distance metric in the objective of LDA, which makes it sensitive to outliers. Many previous works improve the robustness of LDA by using the l1-norm distance. However, the resulting robustness against outliers is limited, and l1-norm solvers are mostly based on a greedy search strategy, which is time-consuming and prone to getting stuck in local optima. In this paper, we propose a novel robust LDA measured by the l2,1-norm to learn robust discriminative projections. The proposed model is challenging to solve since it requires minimizing and maximizing (minmax) l2,1-norm terms simultaneously. We first systematically derive an efficient iterative optimization algorithm to solve a general ratio minimization problem, and then rigorously prove its convergence. More importantly, an alternately non-greedy iterative re-weighted optimization algorithm is developed on top of this approach to solve the proposed l2,1-norm minmax problem. In addition, an optimal weighted mean mechanism is derived from the designed objective and solver, which can be applied to other approaches to improve their robustness. Experimental results on several real-world datasets show the effectiveness of the proposed method.
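To make the re-weighting idea mentioned above concrete, the sketch below illustrates how l2,1-norm objectives are typically handled: per-sample weights inversely proportional to projected residual norms are recomputed at every iteration, and the projection is updated from the re-weighted scatter matrices. This is only a minimal illustration of iterative re-weighting, not the paper's alternately non-greedy minmax solver; the function name, the generalized-eigenvector update, and all parameters are assumptions for demonstration.

```python
import numpy as np

def reweighted_l21_lda(X, y, k, n_iter=30, eps=1e-8, seed=0):
    """Hedged sketch of l2,1-norm style iterative re-weighting for LDA.

    X : (n, d) data matrix, y : (n,) integer labels, k : target dimension.
    NOTE: this is an illustrative re-weighted scatter update, not the exact
    algorithm proposed in the paper.
    """
    n, d = X.shape
    classes = np.unique(y)
    mu = X.mean(axis=0)
    means = {c: X[y == c].mean(axis=0) for c in classes}

    # random orthonormal initialization of the projection
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))

    for _ in range(n_iter):
        Sw = np.zeros((d, d))  # re-weighted within-class scatter
        Sb = np.zeros((d, d))  # re-weighted between-class scatter

        # l2,1-style weights: 1 / (2 * ||W^T r||_2) for each residual r
        for i in range(n):
            r = X[i] - means[y[i]]
            w_i = 1.0 / (2.0 * max(np.linalg.norm(W.T @ r), eps))
            Sw += w_i * np.outer(r, r)
        for c in classes:
            b = means[c] - mu
            n_c = int(np.sum(y == c))
            w_c = 1.0 / (2.0 * max(np.linalg.norm(W.T @ b), eps))
            Sb += n_c * w_c * np.outer(b, b)

        # update W from the leading eigenvectors of pinv(Sw) @ Sb,
        # then re-orthonormalize (a standard LDA-style update)
        evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
        idx = np.argsort(-evals.real)[:k]
        W, _ = np.linalg.qr(evecs[:, idx].real)

    return W
```

Because each weight shrinks the influence of samples with large projected residuals, outliers contribute less to the scatter matrices than under the squared l2-norm, which is the intuition behind the robustness claims in the abstract.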