Abstract
Deep metric learning (DML) is designed to maximize inter-class variance, i.e., the distance between embedding features belonging to different classes. Because conventional DML techniques either ignore the statistical characteristics of the embedding space or compute similarity only from the given features, they struggle to adaptively reflect the characteristics of the feature distribution during training. This paper proposes a virtual metric loss (VML) that combines embedding features with virtual samples produced through linear discriminant analysis (LDA). This study is valuable in that it proposes a new metric that learns the inter-class variance of embedding features by integrating discriminant analysis and metric learning, which share the common goal of inter-class variance analysis. In addition, we theoretically analyze the eigenvalue problem and the degree of stabilization in the embedding space. We verified the performance of the proposed VML through extensive experiments on large-scale and few-shot retrieval datasets. For example, on the CUB200-2011 dataset, VML achieved a recall rate about 0.7% higher than a state-of-the-art method. We also explored a new similarity measure based on virtual samples and adjusted the difficulty of embedding features, confirming the potential of extending virtual samples to various fields of pattern recognition.
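As a rough illustration of the idea described above, the following sketch projects toy embedding features with LDA and derives one "virtual sample" per class in the discriminant space, then scores real projections with a triplet-style margin loss against those virtual samples. All names, the centroid-based virtual-sample rule, and the margin `m` are assumptions for illustration; the paper's actual VML formulation may differ.

```python
# Hypothetical sketch of LDA-based virtual samples for a metric-style loss.
# The sampling rule and loss form are illustrative assumptions, not the
# paper's exact VML definition.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Toy embedding features: 3 classes, 20 samples each, 8-dim embeddings.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(20, 8)) for c in range(3)])
y = np.repeat(np.arange(3), 20)

# Fit LDA: its projection maximizes inter-class variance relative to
# intra-class variance, the same quantity DML tries to enlarge.
lda = LinearDiscriminantAnalysis(n_components=2)
Z = lda.fit_transform(X, y)  # discriminant-space projections

# One simple choice of virtual sample per class: the class centroid in
# the discriminant space (assumed for illustration).
virtual = np.vstack([Z[y == c].mean(axis=0) for c in range(3)])

def virtual_metric_loss(z, label, virtual, m=1.0):
    """Margin loss: pull z toward its class's virtual sample, push it
    away from the nearest other class's virtual sample."""
    d_pos = np.linalg.norm(z - virtual[label])
    d_neg = min(np.linalg.norm(z - virtual[c])
                for c in range(len(virtual)) if c != label)
    return max(0.0, d_pos - d_neg + m)

loss = np.mean([virtual_metric_loss(Z[i], y[i], virtual)
                for i in range(len(y))])
print(f"mean virtual metric loss: {loss:.4f}")
```

In a real DML pipeline the embeddings would come from a trained network and the loss would be differentiated through the projection; this sketch only shows the geometric role the virtual samples play.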