Abstract

Visual tracking of objects subjected to non-linear motion and appearance changes has proven to be a difficult task in computer vision. While research in visual object tracking has progressed significantly in terms of robust tracking under such conditions, existing algorithms have shown limited capability for long-term tracking of handheld objects during human-object interactions. Tracking failure is a consequence of abrupt changes in the handheld object's motion, which cause the tracker to drift away from the optimal object region. In this paper, we present a novel three-layer RGB-D image model, formulated with Bayesian filters, that tracks handheld objects using a near-constant-velocity motion model. Our method divides the image into three layers of abstraction, each encoding visual information about the environment, the human, and the object, and each contributing toward precise localization of the handheld object during tracking. A boundary re-alignment step is introduced during tracking so that the tracker's predicted object region is re-aligned to the optimal object region, reducing the likelihood of the tracker drifting away from the object. This compensation of the tracker's prediction offset enables our algorithm to robustly track handheld objects subjected to abrupt changes in motion during manipulation.
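The prediction-correction cycle underlying the abstract can be sketched as a standard Kalman filter with a near-constant-velocity motion model, where the measurement update plays the role of re-aligning the predicted region to the observed object position. This is a minimal illustrative sketch, not the paper's implementation: the state layout, matrices, and noise parameters below are all assumptions.

```python
import numpy as np

# Sketch of a near-constant-velocity Bayesian (Kalman) filter in 2D image
# coordinates. State is [x, y, vx, vy]; all parameter values are assumed
# for illustration and are not taken from the paper.

dt = 1.0  # one frame per step

# Constant-velocity state transition: position advances by velocity * dt.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)

# We observe only the object's position (e.g. a detected centroid).
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)

Q = np.eye(4) * 1e-2   # process noise covariance (assumed)
R = np.eye(2) * 1.0    # measurement noise covariance (assumed)

def predict(x, P):
    """Propagate the state and covariance one frame ahead."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a measured object position z.
    Conceptually, this correction is what pulls the predicted region
    back toward the observed object, countering drift."""
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Example: an object moving right at ~2 px/frame.
x = np.array([0.0, 0.0, 0.0, 0.0])
P = np.eye(4) * 10.0
for t in range(1, 20):
    x, P = predict(x, P)
    z = np.array([2.0 * t, 0.0])      # simulated centroid measurement
    x, P = update(x, P, z)
print(x[:2], x[2:])  # estimated velocity converges toward (2, 0)
```

In a full tracker, the measurement `z` would come from the three-layer segmentation rather than a simulated detector, and the boundary re-alignment step would refine the predicted bounding region before the update.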
