Abstract

Manifold learning is an effective dimensionality reduction technique for face feature extraction; generally speaking, it tends to preserve the local neighborhood structure of the given samples. However, the neighbors of a sample often comprise more inter-class data than intra-class data, which is undesirable for classification. In this paper, we address this problem by proposing a subclass-center based manifold preserving projection (SMPP) approach, which aims to preserve the local neighborhood structure of subclass centers instead of the given samples. We show theoretically, from a probability perspective, that the neighbors of a subclass center comprise more intra-class data than inter-class data and are thus more desirable for classification. To take full advantage of class separability, we further propose the discriminant SMPP (DSMPP) approach, which incorporates the subclass discriminant analysis (SDA) technique into SMPP. In contrast to related discriminant manifold learning methods, DSMPP is formulated as a dual-objective optimization problem, for which we present an analytical solution. Experimental results on the public AR, FERET and CAS-PEAL face databases demonstrate that the proposed approaches achieve better classification performance than related manifold learning and discriminant manifold learning methods.
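
The abstract does not give the SMPP formulation, so the following is only a rough sketch of the core idea: estimate subclass centers within each class and then learn a neighborhood-preserving projection over those centers rather than over the raw samples. The k-means subclass estimation, the LPP-style graph objective, and all names and parameters (smpp_fit, n_subclasses, n_neighbors, reg) are illustrative assumptions, not the authors' actual method.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans
from sklearn.neighbors import kneighbors_graph

def smpp_fit(X, y, n_subclasses=2, n_neighbors=3, n_components=10, reg=1e-4):
    """Illustrative sketch of a subclass-center based manifold preserving projection.

    X : (n_samples, n_features) data matrix, y : class labels.
    Returns a (n_features, n_components) projection matrix.
    """
    # 1. Subclass centers: cluster each class separately and keep the centroids.
    #    (The paper's subclass estimation may differ; k-means is an assumption.)
    centers = []
    for c in np.unique(y):
        Xc = X[y == c]
        k = min(n_subclasses, len(Xc))
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(Xc)
        centers.append(km.cluster_centers_)
    C = np.vstack(centers)                       # (m, d) subclass centers

    # 2. Neighborhood graph built over the subclass centers, not the samples.
    W = kneighbors_graph(C, n_neighbors=min(n_neighbors, len(C) - 1),
                         mode='connectivity').toarray()
    W = np.maximum(W, W.T)                       # symmetrize adjacency
    D = np.diag(W.sum(axis=1))
    L = D - W                                    # graph Laplacian

    # 3. LPP-style generalized eigenproblem on the centers:
    #    minimize a^T C^T L C a  subject to  a^T C^T D C a = 1.
    A = C.T @ L @ C
    B = C.T @ D @ C + reg * np.eye(C.shape[1])   # small ridge for numerical stability
    evals, evecs = eigh(A, B)                    # eigenvalues in ascending order
    return evecs[:, :n_components]               # smallest ones span the projection

# Usage: P = smpp_fit(X_train, y_train); features = X @ P
```

Because the graph is defined on subclass centers, a noisy sample with many inter-class neighbors no longer distorts the preserved structure; DSMPP would additionally fold an SDA-style discriminant criterion into the objective, which is not shown here.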
