Abstract

Metric-based learning frameworks are widely used in data-scarce few-shot visual classification. However, the loss functions commonly used in this setting limit the effectiveness of metric learning. One issue is that the nearest-neighbor classification scheme greatly narrows the value range of the similarity between a query and the class prototypes, which weakens the guidance the loss function can provide. The other is that episode-based training randomizes the class combination in each iteration, which reduces the ability of traditional softmax losses to learn effectively from episodes with varying data distributions. To address these problems, we first review several variants of the softmax loss from a unified perspective and then propose a novel Dynamically-Scaled Softmax Loss (DSSL). By adding a probability regulator (which scales probabilities) and a loss regulator (which scales losses), DSSL adaptively adjusts the prediction distribution and the training weight of each sample, forcing the model to focus on more informative samples. Finally, we show that few-shot classifiers trained with the proposed DSSL achieve competitive results on four generic benchmarks and one fine-grained benchmark, demonstrating that it improves both the distinguishability of the learned feature space on base classes and its generalizability to novel classes.
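
A minimal PyTorch sketch of what such a dynamically-scaled softmax loss could look like, assuming a prototypical-network-style classifier with cosine similarity. The two regulators shown here (a temperature tau for probability scaling and a focal-style weight with exponent gamma for loss scaling) are illustrative stand-ins chosen for clarity, not the paper's exact formulation:

    import torch
    import torch.nn.functional as F

    def dynamically_scaled_softmax_loss(queries, prototypes, labels,
                                        tau=0.1, gamma=2.0):
        # queries: (Q, D) query embeddings; prototypes: (C, D) class
        # prototypes; labels: (Q,) ground-truth class indices.
        q = F.normalize(queries, dim=-1)
        p = F.normalize(prototypes, dim=-1)
        # Cosine similarities lie in [-1, 1], the narrow logit range
        # the abstract points to.
        logits = q @ p.t()                                  # (Q, C)
        # Probability regulator (assumed form): a small temperature
        # widens the logit range and sharpens the predicted
        # distribution before the softmax.
        nll = F.cross_entropy(logits / tau, labels, reduction='none')
        # Loss regulator (assumed form): up-weight low-confidence,
        # more informative samples with a focal-style factor
        # (1 - p_true)^gamma.
        p_true = torch.exp(-nll)
        weights = (1.0 - p_true) ** gamma
        return (weights * nll).mean()

    # Toy usage: a 5-way episode with 15 queries and 64-d embeddings.
    queries = torch.randn(15, 64)
    prototypes = torch.randn(5, 64)
    labels = torch.randint(0, 5, (15,))
    loss = dynamically_scaled_softmax_loss(queries, prototypes, labels)

Under these assumptions, both regulators act per episode: the temperature rescales the prediction distribution regardless of which classes were sampled, and the per-sample weight shifts training effort toward queries the current metric still confuses.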
