Abstract

The advancement of object detection (OD) in open-vocabulary and open-world scenarios is a critical challenge in computer vision. We introduce OmDet, a novel language-aware object detection architecture, together with an innovative training mechanism that harnesses continual learning and multi-dataset vision-language pre-training. Leveraging natural language as a universal knowledge representation, OmDet accumulates "visual vocabularies" from diverse datasets, unifying detection as a language-conditioned task. Its multimodal detection network (MDN) overcomes the challenges of multi-dataset joint training and generalizes to numerous training datasets without manual label taxonomy merging. We demonstrate that OmDet outperforms strong baselines in object detection in the wild, open-vocabulary detection, and phrase grounding, achieving state-of-the-art results. Ablation studies reveal the impact of scaling the pre-training visual vocabulary, indicating a promising direction for further expansion to larger datasets. The effectiveness of our deep fusion approach is underscored by its ability to learn jointly from multiple datasets, enhancing performance through knowledge sharing.

