Abstract
We present a scalable object tracking framework capable of tracking the contours of rigid and non-rigid objects in the presence of occlusion. The method adaptively divides the object contour into sub-contours and employs several low-level features, such as color edges, color segmentation, motion models, motion segmentation, and shape continuity, in a feedback loop to track each sub-contour. We also introduce novel performance evaluation measures to assess the quality of the segmentation and tracking. The results of these performance measures are used in a feedback loop to adjust the weights assigned to each low-level feature for each sub-contour at each frame. The framework is scalable because it can be adapted to roughly track simple objects in real time as well as to track more complex objects with pixel accuracy in offline mode. The proposed method does not depend on any single motion or shape model and requires no training. Experimental results demonstrate that the algorithm tracks object boundaries accurately under significant occlusion and background clutter.
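The core feedback mechanism described above, fusing per-feature scores and re-weighting each feature based on how well it performed in the current frame, can be sketched as follows. This is an illustrative simplification, not the authors' implementation: the feature names, the score/performance values, and the exponential-smoothing update rule are assumptions chosen to show the idea.

```python
def combine_scores(scores, weights):
    """Fuse per-feature confidence scores (in [0, 1]) for one
    sub-contour using the current normalized weights."""
    total = sum(weights[f] for f in scores)
    return sum(weights[f] * scores[f] for f in scores) / total

def update_weights(weights, performance, lr=0.5):
    """Feedback step: pull each feature's weight toward its measured
    per-frame performance, then renormalize so the weights sum to 1.
    The smoothing rule is a hypothetical stand-in for the paper's
    performance-driven weight adjustment."""
    for f in weights:
        weights[f] = (1 - lr) * weights[f] + lr * performance[f]
    norm = sum(weights.values())
    return {f: w / norm for f, w in weights.items()}

# Hypothetical low-level features, initialized with equal weights.
features = ["color_edge", "color_segmentation", "motion", "shape"]
weights = {f: 1.0 / len(features) for f in features}

# One frame: made-up per-feature scores and performance measures
# for a single sub-contour.
scores = {"color_edge": 0.9, "color_segmentation": 0.4,
          "motion": 0.7, "shape": 0.8}
performance = {"color_edge": 0.8, "color_segmentation": 0.2,
               "motion": 0.6, "shape": 0.9}

fused = combine_scores(scores, weights)          # fused sub-contour score
weights = update_weights(weights, performance)   # weights for next frame
```

After the update, features that tracked the sub-contour well (e.g. shape continuity) carry more weight in the next frame, while unreliable ones (e.g. color segmentation under clutter) are downweighted, which is the essence of the per-sub-contour, per-frame feedback loop.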