Abstract

We propose a method for learning Linear Discriminant Analysis (LDA) with a Siamese Neural Network (SNN) architecture to obtain a low-dimensional image descriptor. The novelty of our work is that the LDA projection matrix is learned between the final fully-connected layers of the SNN. An SNN architecture is used because the proposed loss maximizes the Kullback-Leibler (KL) divergence between the feature distributions produced by the two branches of the SNN. The network thereby learns a feature space with the properties sought by LDA: the learned image descriptors are a) low-dimensional, b) have small intra-class variance, c) have large inter-class variance, and d) allow the classes to be separated by linear decision hyperplanes. A further advantage of the proposed method is that LDA learning happens end-to-end. We measured classification accuracy on three datasets, MNIST, CIFAR-10, and STL-10, and compared the performance with other state-of-the-art methods. We also measured the KL divergence between class pairs and visualized the projections of the feature vectors along the learned discriminant directions.
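
To make the training scheme described above concrete, below is a minimal sketch of one possible realization in PyTorch: a Siamese network whose final linear layer plays the role of the learned LDA projection, trained by maximizing a KL divergence between the projected feature distributions of the two branches. The backbone, the Gaussian modelling of the projected features, and all names (`SiameseLDA`, `kl_gaussian`, the layer sizes) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch only: assumes a simple MLP backbone and Gaussian-modelled features.
import torch
import torch.nn as nn


class SiameseLDA(nn.Module):
    def __init__(self, in_dim: int = 784, feat_dim: int = 128, proj_dim: int = 9):
        super().__init__()
        # Shared backbone producing a high-dimensional feature vector.
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )
        # Final fully-connected layer acting as the LDA-style projection matrix,
        # mapping features into a low-dimensional discriminant space.
        self.lda_proj = nn.Linear(feat_dim, proj_dim, bias=False)

    def forward(self, x1: torch.Tensor, x2: torch.Tensor):
        # The two branches share all weights (Siamese architecture).
        z1 = self.lda_proj(self.backbone(x1))
        z2 = self.lda_proj(self.backbone(x2))
        return z1, z2


def kl_gaussian(z_p: torch.Tensor, z_q: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """KL( N(mu_p, Sigma_p) || N(mu_q, Sigma_q) ) between two batches of projected
    features, each modelled as a multivariate Gaussian (an assumption of this sketch)."""
    d = z_p.shape[1]
    mu_p, mu_q = z_p.mean(0), z_q.mean(0)
    cov = lambda z, mu: (z - mu).T @ (z - mu) / (z.shape[0] - 1) + eps * torch.eye(d)
    sig_p, sig_q = cov(z_p, mu_p), cov(z_q, mu_q)
    sig_q_inv = torch.linalg.inv(sig_q)
    diff = (mu_q - mu_p).unsqueeze(1)
    return 0.5 * (torch.trace(sig_q_inv @ sig_p)
                  + (diff.T @ sig_q_inv @ diff).squeeze()
                  - d + torch.logdet(sig_q) - torch.logdet(sig_p))


# One training step: feed same-size batches of two different classes through the
# two branches and minimize the negative (i.e. maximize) symmetrized KL divergence.
model = SiameseLDA()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_class_a = torch.randn(64, 784)   # stand-in batch for class A images (flattened)
x_class_b = torch.randn(64, 784)   # stand-in batch for class B images (flattened)
z_a, z_b = model(x_class_a, x_class_b)
loss = -(kl_gaussian(z_a, z_b) + kl_gaussian(z_b, z_a))  # push class distributions apart
opt.zero_grad()
loss.backward()
opt.step()
```

Under these assumptions, a loss of this form directly encourages large inter-class divergence in the projected space; how intra-class compactness and the full pairing strategy are handled in the actual method is not specified in the abstract.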
