Abstract

A method is presented for tracking 3D objects as they transform rigidly in space within a sparse range image sequence. The method operates in discrete space and exploits the coherence across image frames that results from the relationship between known bounds on the object's velocity and the sensor frame rate. These motion bounds allow the interframe transformation space to be reduced to a tractable, indeed tiny, size of only tens or hundreds of possible states. The tracking problem is thereby cast into a classification framework, effectively trading localization precision for runtime efficiency and robustness. The method has been implemented and tested extensively on a variety of freeform objects within a sparse range data stream comprising only a few hundred points per image. It has been shown to compare favorably against continuous-domain Iterative Closest Point (ICP) tracking methods, performing both more efficiently and more robustly. A hybrid method has also been implemented that executes a small number of ICP iterations following the initial discrete classification phase. This hybrid is both more efficient than ICP alone and more robust than either the discrete classification method or ICP on its own.
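As a rough illustration of the two stages the abstract describes, the following Python sketch enumerates a small discrete set of interframe transforms permitted by the motion bounds, picks the best-matching one by classification, and then refines it with a few ICP iterations. Everything here is an assumption made for illustration, not the paper's implementation: the function names (candidate_transforms, classify_transform, icp_refine), the yaw-only rotation grid, the step counts, and the nearest-neighbour match score are generic stand-ins, since the abstract does not specify the actual discrete match measure or transform set.

```python
# Hypothetical sketch of discrete-classification tracking with ICP refinement.
# The candidate grid, scoring function, and parameters are illustrative only.
from itertools import product

import numpy as np
from scipy.spatial import cKDTree


def candidate_transforms(max_trans, max_rot, steps=3):
    """Enumerate a small discrete set of interframe rigid transforms.

    Velocity bounds and the sensor frame rate limit how far the object can
    move between frames, so only tens or hundreds of candidates are needed
    (3 steps over 4 axes here gives 81).
    """
    t_vals = np.linspace(-max_trans, max_trans, steps)
    r_vals = np.linspace(-max_rot, max_rot, steps)
    for tx, ty, tz, rz in product(t_vals, t_vals, t_vals, r_vals):
        c, s = np.cos(rz), np.sin(rz)
        # Yaw-only rotation grid, purely for brevity in this sketch.
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        yield R, np.array([tx, ty, tz])


def classify_transform(model_pts, frame_pts, max_trans, max_rot):
    """Pick the candidate transform whose moved model best matches the frame.

    Score = mean nearest-neighbour distance from transformed model points to
    the sparse range points of the current frame (a stand-in for whatever
    discrete-space match measure the paper actually uses).
    """
    tree = cKDTree(frame_pts)
    best, best_score = None, np.inf
    for R, t in candidate_transforms(max_trans, max_rot):
        moved = model_pts @ R.T + t
        score = tree.query(moved)[0].mean()
        if score < best_score:
            best, best_score = (R, t), score
    return best


def icp_refine(model_pts, frame_pts, R, t, iters=3):
    """Hybrid stage: a few point-to-point ICP iterations after classification.

    Each iteration pairs the moved model with its nearest frame points and
    solves for the aligning rotation via the SVD (Kabsch) solution.
    """
    tree = cKDTree(frame_pts)
    for _ in range(iters):
        moved = model_pts @ R.T + t
        nn = frame_pts[tree.query(moved)[1]]
        mu_m, mu_f = moved.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((moved - mu_m).T @ (nn - mu_f))
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:  # guard against a reflection solution
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        R, t = dR @ R, dR @ t + (mu_f - dR @ mu_m)
    return R, t
```

Under these assumptions, calling classify_transform for the coarse discrete estimate and then icp_refine for a handful of iterations mirrors the hybrid scheme's two stages: the cheap classification bounds the search, and the short ICP pass recovers the localization precision the discretization gives up.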
