Abstract

In recent years, codetecting objects by exploiting contextual information across multiple images has attracted considerable attention. We introduce an object codetection method that exploits contextual information among multiple images through a higher-order conditional random field (CRF). First, we obtain object candidates from each image of a test set using a pretrained detector. Second, we feed the object candidates into a higher-order CRF that captures appearance similarity with pairwise potentials and object category co-occurrence constraints with higher-order potentials. Finally, we jointly predict the category labels of all object candidates through mean-field inference in the CRF. Experimental results on the Caltech Pedestrian, PASCAL VOC 2007, PASCAL VOC 2012, and COCO datasets demonstrate the effectiveness of the proposed method over the baseline.
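As a rough illustration of the inference loop described above, the sketch below (not the authors' implementation; the toy potentials, shapes, and names are all illustrative assumptions) runs mean-field updates over candidate label distributions, combining unary detector scores, pairwise appearance-similarity messages, and a simple image-level category co-occurrence term:

```python
# Minimal mean-field sketch for joint labelling of object candidates.
# Assumed inputs: detector scores (unary), an appearance-similarity matrix
# between candidates (pairwise), and a category co-occurrence matrix
# (higher-order surrogate). All of these are toy stand-ins.
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def mean_field(unary, similarity, cooccurrence, n_iters=10):
    """
    unary:        (N, C) detector scores per candidate and category.
    similarity:   (N, N) appearance similarity between candidates (0 = unrelated).
    cooccurrence: (C, C) compatibility between co-occurring categories.
    Returns (N, C) approximate marginal label distributions Q.
    """
    Q = softmax(unary)                          # initialise from unary scores only
    for _ in range(n_iters):
        # Pairwise message: visually similar candidates pull toward the same label.
        pairwise_msg = similarity @ Q           # (N, C)
        # Higher-order message: labels compatible with the expected set of
        # categories currently present in the candidate pool get a boost.
        expected_labels = Q.mean(axis=0)        # (C,)
        ho_msg = expected_labels @ cooccurrence # (C,), broadcast over candidates
        Q = softmax(unary + pairwise_msg + ho_msg)
    return Q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, C = 5, 3                                 # 5 candidates, 3 categories
    unary = rng.normal(size=(N, C))
    similarity = 0.1 * np.abs(rng.normal(size=(N, N)))
    np.fill_diagonal(similarity, 0.0)           # no self-similarity
    cooccurrence = 0.5 * np.eye(C) + 0.1        # mild preference for repeated categories
    Q = mean_field(unary, similarity, cooccurrence)
    print("Predicted labels:", Q.argmax(axis=1))
```

The sketch only conveys the structure of the joint prediction step; the actual potentials, candidate generation, and inference details follow the paper.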
