Abstract
Deep metric learning has attracted growing interest from the research community in recent years, largely because of its effectiveness in combining distance metric learning with deep neural networks. Existing approaches, such as pair-based angular loss functions and spherical embedding constraints, often require extra trainable parameters and overlook similarities between classes. This paper presents two approaches: 'In-batch feature vector constraint' and 'Unsupervised label integration.' Both methods explicitly account for similarities between different classes, and their high compatibility allows them to be seamlessly combined with a variety of loss functions. Comprehensive experiments on four image classification datasets and seven network architectures demonstrate that the proposed methods improve network performance.
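To illustrate how an in-batch constraint of this kind can be attached to an arbitrary base metric-learning loss, the sketch below penalizes high cosine similarity between same-batch embeddings of different classes. The function name, the margin-based hinge penalty, and the weighting factor `lam` are illustrative assumptions for this sketch; they are not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def in_batch_similarity_penalty(embeddings, labels, margin=0.5):
    """Hypothetical in-batch feature-vector constraint (sketch only):
    penalize cross-class cosine similarities above a margin within a mini-batch."""
    z = F.normalize(embeddings, dim=1)                        # unit-norm feature vectors
    sim = z @ z.t()                                           # pairwise cosine similarities
    diff_class = labels.unsqueeze(0) != labels.unsqueeze(1)   # mask of cross-class pairs
    penalty = F.relu(sim[diff_class] - margin)                # hinge on overly similar cross-class pairs
    return penalty.mean() if penalty.numel() > 0 else sim.new_zeros(())

# Usage sketch: add the penalty to any existing metric-learning loss.
# total_loss = base_loss(embeddings, labels) + lam * in_batch_similarity_penalty(embeddings, labels)
```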