Moving objects in the environment are of high priority, and pose significant challenges, in growing domains such as unmanned vehicles and intelligent robotics. Estimating the motion state of objects from point clouds in outdoor scenarios is currently a challenging area of research, due to factors such as limited temporal information, large volumes of data, long network processing times, and the ego-motion of the sensor platform. A single point cloud frame typically contains 60,000–120,000 points, yet most current motion state estimation methods for point clouds downsample to only a few thousand points for fast processing. This downsampling discards scene information, leaving such methods far from practical deployment. Thus, this paper proposes a motion state estimation method that combines spatio-temporal constraints with deep learning. The method first estimates and compensates for the ego-motion across multi-frame point cloud data, mapping all frames into a unified coordinate system; a point cloud motion segmentation model is then applied to the multi-frame point cloud to segment moving objects; finally, spatio-temporal constraints are used to associate each moving object across time and estimate its motion vectors. Experiments on KITTI, nuScenes, and real captured data show that the proposed method performs well, achieving average vector deviations of only 0.036 m on KITTI and 0.043 m on nuScenes at a processing time of about 80 ms; the EPE3D error on KITTI is only 0.076 m, demonstrating the effectiveness of the method.
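The abstract does not give implementation details, but the first step it describes, compensating ego-motion and mapping multi-frame data into a unified coordinate system, is a standard rigid-transform operation. The sketch below is a minimal NumPy illustration of that step under the assumption that per-frame sensor poses are already available (e.g., from an odometry estimate); the function and variable names (`compensate_ego_motion`, `poses`) are hypothetical and not taken from the paper.

```python
import numpy as np

def compensate_ego_motion(frames, poses):
    """Map each point cloud frame into the coordinate system of the
    first frame, removing apparent motion caused by the moving sensor.

    frames : list of (N_i, 3) arrays, one per time step (sensor frame).
    poses  : list of (4, 4) homogeneous pose matrices, the sensor pose
             at each time step expressed in a common world frame
             (assumed given, e.g., from odometry).
    """
    # Transform that takes world coordinates into frame 0's coordinates.
    world_to_ref = np.linalg.inv(poses[0])
    unified = []
    for pts, pose in zip(frames, poses):
        # Homogeneous coordinates: (N, 4).
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])
        # Sensor frame -> world frame -> reference (frame 0) frame.
        aligned = (world_to_ref @ pose @ homo.T).T[:, :3]
        unified.append(aligned)
    return unified

# Synthetic check: the sensor translates 1 m along x between frames,
# so a static point should map to the same unified coordinates.
frame0 = np.array([[5.0, 0.0, 0.0]])
frame1 = np.array([[4.0, 0.0, 0.0]])   # same static point, seen 1 m closer
pose0 = np.eye(4)
pose1 = np.eye(4)
pose1[0, 3] = 1.0                      # sensor moved +1 m along x
out = compensate_ego_motion([frame0, frame1], [pose0, pose1])
assert np.allclose(out[0], out[1])     # static point is now stationary
```

After this compensation, any residual displacement of a point between frames can be attributed to object motion rather than sensor motion, which is what makes the subsequent segmentation and motion-vector estimation steps described in the abstract tractable.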