Abstract

In this paper, we address the problem of using boosting to detect object classes with significant intra-class variation. Current approaches generally require prior knowledge, such as semantic information, to obtain an explicit partition of the positive samples. This can demand many manually labeled samples and can limit classifier performance due to subjective labeling error. We present a novel JointBoost-based learning method that learns the varying appearances of targets without any prior assumption. With an implicit partition, features specific to subsets of samples can contribute to classification alongside generic features learned from the whole training set. By encouraging intra-class and inter-class feature sharing between implicit sub-categories, our data-driven learning approach avoids local optima in the candidate weak classifier space. Experimental results on two popular tasks demonstrate the considerable improvements brought by the new approach. We hope that implicit JointBoost will extend the applicability of traditional boosting methods.
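
As a point of reference for the feature-sharing mechanism the abstract builds on, the following is a minimal sketch of one round of standard JointBoost-style boosting (GentleBoost with shared regression stumps), in which a single stump is shared by a chosen subset of (sub-)categories while the remaining categories receive constant corrections. All names (`fit_shared_stump`, `jointboost_round`), the data layout (per-class ±1 targets `Z` and weights `W`), and the exhaustive subset search are illustrative assumptions; the paper's contribution, the implicit partition into sub-categories, is not implemented here.

```python
import itertools
import numpy as np

def fit_shared_stump(X, Z, W, feat, thresh, subset):
    """Weighted least-squares fit of a regression stump shared by the classes
    in `subset`: h_c(x) = a*[x_feat > thresh] + b for c in subset, and a
    constant k_c for every other class (GentleBoost/JointBoost style)."""
    ind = (X[:, feat] > thresh).astype(float)            # stump indicator per sample
    S = list(subset)
    Ws, Zs = W[:, S], Z[:, S]                            # weights/targets of shared classes
    w_pos = (Ws * ind[:, None]).sum()
    w_neg = (Ws * (1.0 - ind)[:, None]).sum()
    ab = (Ws * Zs * ind[:, None]).sum() / max(w_pos, 1e-12)          # a + b
    b = (Ws * Zs * (1.0 - ind)[:, None]).sum() / max(w_neg, 1e-12)
    a = ab - b
    pred = a * ind[:, None] + b                          # predictions for shared classes
    err = (Ws * (Zs - pred) ** 2).sum()
    k = {}
    for c in range(Z.shape[1]):                          # constants for non-shared classes
        if c in subset:
            continue
        k[c] = (W[:, c] * Z[:, c]).sum() / max(W[:, c].sum(), 1e-12)
        err += (W[:, c] * (Z[:, c] - k[c]) ** 2).sum()
    return err, (feat, thresh, a, b, frozenset(subset), k)

def jointboost_round(X, Z, W, thresholds_per_feat=8):
    """One boosting round: search features, thresholds and class subsets for
    the shared stump with minimum weighted squared error. The exhaustive
    subset search is for clarity only; practical JointBoost implementations
    use a greedy or heuristic search over subsets."""
    n, d = X.shape
    C = Z.shape[1]
    best_err, best = np.inf, None
    for feat in range(d):
        threshs = np.quantile(X[:, feat], np.linspace(0.1, 0.9, thresholds_per_feat))
        for thresh in threshs:
            for r in range(1, C + 1):
                for subset in itertools.combinations(range(C), r):
                    err, params = fit_shared_stump(X, Z, W, feat, thresh, subset)
                    if err < best_err:
                        best_err, best = err, params
    return best
```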
