Abstract

This paper presents methods for 3D object detection and multi-object (or multi-agent) behavior recognition using a sequence of 3D point clouds of a scene captured over time. Such motion 3D data can be collected with a variety of sensors and techniques, including flash LIDAR (Light Detection And Ranging), stereo cameras, time-of-flight cameras, and spatial phase imaging sensors. Our goal is to segment objects from the 3D point cloud data in order to construct tracks of multiple objects (e.g., persons and vehicles) and then classify the multi-object tracks as one of a set of known behaviors, such as “A person drives a car and gets out.” A track is a sequence of object locations changing over time; it is the compact object-level representation we extract from the motion 3D data. Leveraging the rich structure of dynamic 3D data makes many visual learning problems better posed and more tractable. Our behavior recognition method combines Dynamic Time Warping (DTW)-based behavior distances computed from the individual object-level tracks, expressed in a normalized car-centric coordinate system, to recognize the interactive behavior of the multiple objects. We apply our behavior recognition techniques to data collected with a LIDAR sensor, with promising results.
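
As a rough illustration of the track-matching idea described above, the sketch below computes a DTW distance between two object tracks and sums the per-object distances into a scene-level behavior distance. This is a minimal sketch, not the paper's implementation: the function names, the Euclidean local cost, and the summation over corresponding object tracks are assumptions, and the tracks are assumed to already be expressed in the normalized car-centric coordinate frame.

```python
import numpy as np

def dtw_distance(track_a, track_b):
    """DTW distance between two tracks.

    Each track is an (N, 2) array of object positions over time,
    assumed (for illustration) to already be in a normalized,
    car-centric coordinate frame.
    """
    n, m = len(track_a), len(track_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(track_a[i - 1] - track_b[j - 1])  # local cost
            cost[i, j] = d + min(cost[i - 1, j],       # step in track_a only
                                 cost[i, j - 1],       # step in track_b only
                                 cost[i - 1, j - 1])   # step in both
    return cost[n, m]

def behavior_distance(scene_tracks, template_tracks):
    """Combine per-object DTW distances into one behavior distance.

    Hypothetical combination by summation over corresponding tracks
    (e.g., the person track and the car track of a scene vs. a
    labeled behavior template).
    """
    return sum(dtw_distance(s, t)
               for s, t in zip(scene_tracks, template_tracks))
```

Under this kind of combined distance, a query scene could be labeled with the behavior of the nearest stored template, which is one simple way to realize track-based behavior classification.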
