Abstract

Neural network optimization relies on the loss function's ability to drive the learning of highly discriminative features. In recent years, Softmax loss has been widely used to train neural network models on a variety of tasks. To further enhance the discriminative power of the learned features, Center loss was introduced as an auxiliary function that, jointly with Softmax loss, reduces intra-class variance. In this paper, we propose a novel loss called Differentiable Magnet loss (DML), which can optimize neural networks on its own, without joint supervision by Softmax loss. This loss offers a more definite convergence target for each class: it not only pulls each sample close to its homogeneous (intra-class) center but also pushes it away from all heterogeneous (inter-class) centers in the feature embedding space. Extensive experimental results demonstrate the superiority of DML in a variety of classification and clustering tasks. In particular, 2-D t-SNE visualizations of the learned embedding features show that the proposed loss learns more discriminative representations.
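
The abstract does not give the DML formula, but its description (pull each sample toward its own class center, push it away from every other class center) is enough for a minimal sketch. The following PyTorch code is an illustrative assumption, not the authors' exact formulation; the class name MagnetStyleLoss and its choice of learnable per-class centers with squared Euclidean distances are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MagnetStyleLoss(nn.Module):
    # Illustrative sketch, not the paper's exact DML: each class keeps a
    # learnable center, and every embedding is pulled toward its own
    # (intra-class) center while being pushed away from all other
    # (inter-class) centers in a single objective.
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, embeddings, labels):
        # Squared Euclidean distance from every sample to every class
        # center, shape (batch_size, num_classes).
        dists = torch.cdist(embeddings, self.centers, p=2) ** 2
        # Using negative distances as logits, cross-entropy shrinks the
        # distance to the true center and enlarges the distances to all
        # remaining centers, and is differentiable end to end.
        return F.cross_entropy(-dists, labels)

# Example usage with random data (2-D embeddings for easy visualization):
loss_fn = MagnetStyleLoss(num_classes=10, feat_dim=2)
embeddings = torch.randn(32, 2, requires_grad=True)
labels = torch.randint(0, 10, (32,))
loss = loss_fn(embeddings, labels)
loss.backward()

Because both the embeddings and the centers receive gradients, a loss of this form can in principle supervise a network stand-alone, which is consistent with the abstract's claim that DML does not require joint supervision by Softmax loss.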
