Abstract

Although deep learning methods have drastically improved performance on visual recognition tasks with large inter-class variance, similar-class recognition remains a significant challenge, mainly because of the close resemblance between similar classes. The challenge is further compounded in few-shot learning, where only a very small amount of training data is available; accordingly, noticeable performance degradation has been observed when few-shot methods are applied to such classification tasks. To address this issue, we propose a novel Relation Separation Network (RSNet) that boosts few-shot learning by improving similar-class recognition performance. We assume that image features consist of common and private features: the common features capture the basic attributes shared among similar classes, while the private features capture the attributes unique to each class. RSNet learns to decouple the common and private features of an image. As a result, the feature representation of an image is composed of two weakly associated but easily aligned components, and better classification performance is achieved by giving more attention to subtle, class-specific features. Experimental results on the publicly available miniImageNet, CUB, and CIFAR-FS datasets show that the proposed model outperforms existing state-of-the-art methods. In particular, compared with PT+MAP, RSNet improves overall classification accuracy on the CUB dataset by approximately 5% and similar-class classification accuracy by more than 10%.
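
To make the decoupling idea concrete, the sketch below shows one plausible realization of what the abstract describes: a small head splits a backbone embedding into a common and a private component, and a penalty discourages overlap between the two so that they stay only weakly associated. This is a minimal illustration under stated assumptions, not the authors' implementation; the names (CommonPrivateHead, decouple_loss), the linear projections, and the inner-product penalty are all hypothetical choices made for the example.

```python
# Minimal sketch (not the paper's code) of splitting an image embedding into
# "common" and "private" components and penalizing their overlap.
# All module and function names here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CommonPrivateHead(nn.Module):
    """Projects a backbone embedding into two components:
    common  -> attributes shared among similar classes
    private -> attributes unique to each class."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.common = nn.Linear(in_dim, out_dim)
        self.private = nn.Linear(in_dim, out_dim)

    def forward(self, feat: torch.Tensor):
        # Normalize so the overlap penalty below is scale-invariant.
        c = F.normalize(self.common(feat), dim=-1)
        p = F.normalize(self.private(feat), dim=-1)
        return c, p

def decouple_loss(c: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
    """Penalize correlation between the two components so they remain
    weakly associated, as the abstract suggests."""
    return (c * p).sum(dim=-1).abs().mean()

# Usage: embeddings could come from any frozen or fine-tuned backbone.
feats = torch.randn(8, 512)            # batch of backbone embeddings
head = CommonPrivateHead(512, 128)
common, private = head(feats)
loss = decouple_loss(common, private)  # added to the task loss during training
loss.backward()
```

Normalizing both components turns the inner-product penalty into a cosine-similarity constraint, which keeps the regularizer bounded regardless of feature scale; during few-shot classification, the private component would then carry the subtle cues that separate similar classes.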
