Abstract

The Bounded Hough Transform is introduced to track objects in a sequence of sparse range images. The method is based on a variation of the Generalized Hough Transform that exploits the coherence across image frames resulting from the relationship between known bounds on the object's velocity and the sensor frame rate. It is extremely efficient, running in O(N) for N range data points, and effectively trades off localization precision for runtime efficiency. The method has been implemented and tested on a variety of objects, including freeform surfaces, using both simulated and real data from Lidar and stereo vision sensors. The motion bounds allow the inter-frame transformation space to be reduced to a small, tractable size, containing only 729 possible states. In a variation, the rotational subspace is projected onto the translational subspace, which further reduces the transformation space to only 54 states. Experimental results confirm that the technique works well with very sparse data, possibly comprising only tens of points per frame, and that it is also robust to measurement error and outliers.
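The core idea above can be illustrated with a small voting loop. Note that 729 = 3^6, which is consistent with quantizing each of the six pose DOF to three levels (e.g. decrease, hold, increase) within the velocity bounds. The sketch below, which is an illustrative assumption rather than the authors' implementation, uses translation only (3^3 = 27 states) to keep it short; each range point casts votes for every candidate inter-frame motion consistent with it, so the cost per frame is O(N) with a small constant:

```python
import itertools
import numpy as np

def bounded_hough_track(model, frame, step=1.0, tol=0.5):
    """Vote over a bounded set of candidate inter-frame motions.

    model : (M, 3) array of reference points (previous-frame object points).
    frame : (N, 3) array of sparse range points from the current frame.
    step  : per-axis motion bound implied by velocity limits and frame rate.
    tol   : distance tolerance for a point to support a candidate motion.

    Translational-only sketch; the full Bounded Hough Transform quantizes
    all six DOF the same way, giving 3**6 = 729 candidate states.
    """
    # Candidate translations: each axis moves by -step, 0, or +step.
    states = list(itertools.product((-step, 0.0, step), repeat=3))
    votes = np.zeros(len(states), dtype=int)
    for p in frame:                        # O(N) over range points
        for i, t in enumerate(states):     # constant 27 states per point
            # Vote if undoing candidate motion t lands p near a model point.
            q = np.asarray(p) - np.asarray(t)
            if np.min(np.linalg.norm(model - q, axis=1)) < tol:
                votes[i] += 1
    best = states[int(np.argmax(votes))]   # peak of the vote accumulator
    return best, votes
```

Because the accumulator has a fixed, small number of bins, even a handful of points per frame can produce a clear peak, which matches the abstract's claim of robustness with very sparse data.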
