Abstract

Conventional object detection models require large amounts of training data. In contrast, humans can recognize previously unseen objects by merely knowing their semantic description. To mimic this behavior, zero-shot object detection (ZSD) aims to recognize and localize "unseen" object instances by using only their semantic information. The model is first trained to learn the relationships between visual and semantic domains for seen objects, later transferring the acquired knowledge to entirely unseen objects. This setting gives rise to the need for correct alignment between visual and semantic concepts so that the unseen objects can be identified using only their semantic attributes. In this article, we propose a novel loss function called "polarity loss" that promotes correct visual-semantic alignment for improved ZSD. On the one hand, it refines the noisy semantic embeddings via metric learning on a "semantic vocabulary" of related concepts to establish a better synergy between visual and semantic domains. On the other hand, it explicitly maximizes the gap between positive and negative predictions to achieve better discrimination between seen, unseen, and background objects. Our approach is inspired by embodiment theories in cognitive science, which hold that human semantic understanding is grounded in past experiences (seen objects), related linguistic concepts (word vocabulary), and visual perception (seen/unseen object images). We conduct extensive evaluations on the Microsoft Common Objects in Context (MS-COCO) and Pascal Visual Object Classes (VOC) datasets, showing significant improvements over the state of the art. Our code and evaluation protocols are available at: https://github.com/salman-h-khan/PL-ZSD_Release.
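To illustrate the gap-maximization idea described above, the following PyTorch sketch modulates a focal-style classification term with a penalty that grows when positive and negative class scores are poorly separated. This is a minimal sketch of the intuition, not the authors' exact formulation (see the repository above for the released loss); the function name, the `beta` sharpness parameter, and the sigmoid-shaped penalty are illustrative assumptions.

```python
import torch

def polarity_loss_sketch(logits, targets, gamma=2.0, beta=5.0):
    """Illustrative polarity-style loss (assumption: simplified sketch,
    not the paper's exact definition).

    logits:  (N, C) raw per-class scores from the detector head
    targets: (N, C) one-hot labels (all-zero rows = background anchors)
    """
    p = torch.sigmoid(logits)
    # Focal-style term: down-weight well-classified (easy) predictions.
    p_t = targets * p + (1.0 - targets) * (1.0 - p)
    focal = -((1.0 - p_t) ** gamma) * torch.log(p_t.clamp(min=1e-8))
    # Polarity penalty: compare every class score against the
    # ground-truth class score. A small or negative gap means positive
    # and negative predictions are poorly separated, so that entry's
    # loss is up-weighted, explicitly pushing the two apart.
    pos_score = (p * targets).sum(dim=1, keepdim=True)  # 0 for background
    gap = pos_score - p
    penalty = torch.sigmoid(-beta * gap)
    return (focal * penalty).sum() / max(logits.shape[0], 1)

# Usage on random data: 8 anchors, 80 classes, one foreground anchor.
logits = torch.randn(8, 80, requires_grad=True)
targets = torch.zeros(8, 80)
targets[0, 3] = 1.0
loss = polarity_loss_sketch(logits, targets)
loss.backward()
```

For background anchors the ground-truth score is zero, so confident false positives receive a penalty near one and are punished most, which matches the stated goal of discriminating seen, unseen, and background objects.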