Abstract

Zero-shot object detection (ZSD) aims to locate and recognize novel objects without additional training samples. Most existing methods map visual features into the semantic space, which gives rise to the hubness problem, and learning an effective feature mapping between the two modalities remains a considerable challenge. In this work, we propose a novel end-to-end framework, the Semantic-Visual Auto-Encoder (SVAE) network, to tackle these issues. Distinct from previous works that use fully-connected layers to learn the feature mapping, we implement 1-dimensional convolutions with multiple shared filters to construct the auto-encoder, which maps semantic features into the visual space to alleviate the hubness problem. Specifically, we design a novel loss function, the Softplus Margin Focal Loss (SMFL), for the object classification channel to align the projected semantic features in the visual space and address the class imbalance problem. The SMFL improves the discrimination between projections onto positive and negative categories while retaining the properties of the focal loss. In addition, to improve localization of novel objects, we also provide semantic information to the object localization channel and use a trainable matrix to align the semantic-visual mapping, accounting for noise in the semantic representations. We conduct extensive experiments on four challenging benchmarks, and the results show competitive performance compared with state-of-the-art approaches. In particular, we achieve 8.39%/6.58% mean average precision (mAP) improvements for ZSD/generalized ZSD on the Microsoft COCO benchmark.
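To make the semantic-to-visual mapping concrete, the sketch below illustrates the core idea of an auto-encoder built from shared 1-D convolutional filters rather than fully-connected layers: the encoder slides shared filters over a semantic word vector and projects it into the visual feature space, and a symmetric decoder reconstructs the semantic vector. The dimensions, filter counts, and the interpolation used to match feature lengths are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticVisualAE(nn.Module):
    """Minimal sketch of the SVAE mapping idea (assumed configuration):
    shared 1-D conv filters project semantic features into visual space,
    and a symmetric decoder maps them back for reconstruction."""

    def __init__(self, sem_dim=300, vis_dim=1024, n_filters=16, k=3):
        super().__init__()
        # Shared filters slide over the semantic vector (one input channel).
        self.enc_conv = nn.Conv1d(1, n_filters, k, padding=k // 2)
        self.enc_mix = nn.Conv1d(n_filters, 1, 1)   # fuse filter responses
        self.dec_conv = nn.Conv1d(1, n_filters, k, padding=k // 2)
        self.dec_mix = nn.Conv1d(n_filters, 1, 1)
        self.sem_dim, self.vis_dim = sem_dim, vis_dim

    def encode(self, sem):  # sem: (batch, sem_dim)
        h = F.relu(self.enc_conv(sem.unsqueeze(1)))
        h = self.enc_mix(h)
        # Resample to the visual feature length (an assumption; the paper
        # may match dimensions differently).
        return F.interpolate(h, size=self.vis_dim, mode="linear",
                             align_corners=False).squeeze(1)

    def decode(self, vis):  # vis: (batch, vis_dim)
        h = F.relu(self.dec_conv(vis.unsqueeze(1)))
        h = self.dec_mix(h)
        return F.interpolate(h, size=self.sem_dim, mode="linear",
                             align_corners=False).squeeze(1)

    def forward(self, sem):
        vis = self.encode(sem)     # semantic -> visual (used for detection)
        recon = self.decode(vis)   # visual -> semantic (auto-encoder consistency)
        return vis, recon
```

Because classification is carried out in the visual space, each class prototype is obtained by encoding its word vector; this direction of mapping is what the abstract credits with alleviating the hubness problem.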
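The abstract does not spell out the SMFL formula, but one plausible reading is a sigmoid focal loss applied to margin-shifted logits, with the cross-entropy term written via softplus (since -log sigmoid(x) = softplus(-x)). The sketch below follows that reading; `margin`, `gamma`, and `alpha` are illustrative hyperparameters and the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def softplus_margin_focal_loss(logits, targets, margin=0.2, gamma=2.0, alpha=0.25):
    """Hedged sketch of a softplus-margin focal loss (assumed form).

    logits:  (N, C) raw class scores
    targets: (N, C) one-hot / multi-hot labels in {0, 1}
    """
    # Margin shift: positives lose the margin, negatives gain it, so both
    # must be classified with extra slack, widening the gap between
    # projections onto positive and negative categories.
    shifted = logits - margin * (2.0 * targets - 1.0)
    # Binary cross-entropy expressed with softplus: -log sigmoid(x) = softplus(-x).
    ce = targets * F.softplus(-shifted) + (1.0 - targets) * F.softplus(shifted)
    # Focal modulation: down-weight easy examples to counter class imbalance.
    p = torch.sigmoid(shifted)
    p_t = targets * p + (1.0 - targets) * (1.0 - p)
    alpha_t = targets * alpha + (1.0 - targets) * (1.0 - alpha)
    return (alpha_t * (1.0 - p_t) ** gamma * ce).mean()
```

With `margin=0` this reduces to the standard sigmoid focal loss, which matches the abstract's claim that SMFL retains the properties of the focal loss while adding margin-based discrimination.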
