Abstract
Due to the intrinsic long-tailed distribution of objects in the real world, we are unlikely to be able to train an object recognizer/detector with many visual examples for each category. We therefore have to share visual knowledge between object categories to enable learning with few or no training examples. In this paper, we show that local object similarity information, namely statements that pairs of categories are similar or dissimilar, is a very useful cue for tying different categories to each other for effective knowledge transfer. The key insight is that, given a set of object categories that are similar and a set that are dissimilar, a good object model should respond more strongly to examples from the similar categories than to examples from the dissimilar ones. To exploit this category-dependent similarity regularization, we develop a regularized kernel machine algorithm that trains kernel classifiers for categories with few or no training examples. We also adapt a state-of-the-art object detector to encode object similarity constraints. Our experiments on hundreds of categories from the LabelMe dataset show that our regularized kernel classifiers yield significant improvements in object categorization. We also evaluate the improved object detector on the PASCAL VOC 2007 benchmark dataset.
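To make the key insight concrete, the following is a minimal sketch, in LaTeX notation, of one plausible way to attach the similarity constraint to a standard regularized kernel objective. The index sets $\mathcal{S}(c)$ and $\mathcal{D}(c)$ and the weight $\gamma$ are assumptions introduced here for illustration only; they need not match the paper's exact formulation.

\min_{f_c \in \mathcal{H}} \;\; \lambda \, \|f_c\|_{\mathcal{H}}^2
\;+\; \sum_{i=1}^{n_c} \max\!\big(0,\, 1 - y_i f_c(x_i)\big)
\;+\; \gamma \sum_{s \in \mathcal{S}(c)} \sum_{d \in \mathcal{D}(c)}
\max\!\big(0,\, 1 - f_c(x_s) + f_c(x_d)\big)

Here $f_c$ is the kernel classifier for category $c$ in an RKHS $\mathcal{H}$, the first two terms form the usual SVM objective over the $n_c$ direct training examples of $c$ (possibly $n_c = 0$), and the last term is a hinge penalty that pushes $f_c$ to score examples $x_s$ from similar categories at least a margin above examples $x_d$ from dissimilar categories, which is exactly the behavior the abstract asks of a good object model.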