Abstract

The problem of processing point cloud sequences is considered in this work. In particular, a system is presented that represents and tracks objects in dynamic scenes acquired with low-cost sensors such as the Kinect. An efficient neural-network-based approach is proposed to represent 3D objects and estimate their motion. The system addresses several computer vision tasks, including object segmentation, representation, motion analysis and tracking. The use of a neural network allows motion to be estimated and objects in the scene to be represented in an unsupervised manner, and avoids the problem of finding corresponding features while tracking moving objects. A set of experiments is presented that demonstrates the validity of our method for tracking 3D objects. Moreover, an optimization strategy is applied to achieve real-time processing rates. Favorable results demonstrate the capabilities of the GNG-based algorithm for this task. Videos of the proposed system are available on the project website (http://www.dtic.ua.es/~sorts/3d_object_tracking/).
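
The abstract refers to a GNG (Growing Neural Gas) representation of the point cloud. As a rough, self-contained sketch of how a generic GNG adapts a graph of nodes to incoming 3D points (this follows the standard textbook formulation, not the authors' implementation; class name, structure and all parameter values are illustrative assumptions):

```python
import numpy as np

class GNG:
    """Minimal Growing Neural Gas sketch for 3D point samples (illustrative only)."""

    def __init__(self, eps_b=0.05, eps_n=0.005, max_age=50,
                 lam=100, alpha=0.5, decay=0.995):
        self.eps_b, self.eps_n = eps_b, eps_n        # winner / neighbour learning rates
        self.max_age, self.lam = max_age, lam        # edge age limit, insertion period
        self.alpha, self.decay = alpha, decay        # error reduction factors
        self.nodes = [np.random.rand(3), np.random.rand(3)]  # node positions
        self.error = [0.0, 0.0]                      # accumulated error per node
        self.edges = {}                              # (i, j) with i < j -> age
        self.steps = 0

    def _edge(self, i, j):
        return (min(i, j), max(i, j))

    def adapt(self, x):
        """Process one 3D sample x (numpy array of shape (3,))."""
        d = [np.linalg.norm(x - p) for p in self.nodes]
        s1, s2 = np.argsort(d)[:2]                   # nearest and second-nearest nodes
        self.error[s1] += d[s1] ** 2
        self.nodes[s1] += self.eps_b * (x - self.nodes[s1])   # move winner toward x
        for (i, j) in list(self.edges):
            if s1 in (i, j):
                self.edges[(i, j)] += 1              # age edges emanating from the winner
                other = j if i == s1 else i
                self.nodes[other] += self.eps_n * (x - self.nodes[other])  # move neighbours
        self.edges[self._edge(s1, s2)] = 0           # refresh / create winner-runner edge
        # remove edges that have grown too old (isolated-node removal omitted for brevity)
        self.edges = {e: a for e, a in self.edges.items() if a <= self.max_age}
        self.steps += 1
        if self.steps % self.lam == 0:
            self._insert_node()
        self.error = [e * self.decay for e in self.error]

    def _insert_node(self):
        q = int(np.argmax(self.error))               # node with largest accumulated error
        nbrs = [j for (i, j) in self.edges if i == q] + \
               [i for (i, j) in self.edges if j == q]
        if not nbrs:
            return
        f = max(nbrs, key=lambda n: self.error[n])   # worst neighbour of q
        self.nodes.append(0.5 * (self.nodes[q] + self.nodes[f]))  # new node between them
        self.error[q] *= self.alpha
        self.error[f] *= self.alpha
        self.error.append(self.error[q])
        r = len(self.nodes) - 1
        self.edges.pop(self._edge(q, f), None)       # replace edge q-f with q-r and r-f
        self.edges[self._edge(q, r)] = 0
        self.edges[self._edge(r, f)] = 0
```

In a tracking setting of the kind described above, one would feed each frame's points (or a subsample of them) through adapt(); the resulting node graph then serves as a compact, topology-preserving representation of the objects in the scene.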
