Abstract

Motion prediction from raw LiDAR sensor data has drawn increasing attention and led to a surge of studies following two main paradigms. One is the global motion paradigm, which simultaneously detects objects from point clouds and predicts the future trajectory of each object. The other is the local motion paradigm, which directly performs dense, point-wise motion prediction. We observe that global motion prediction can benefit from local motion representation, since the latter contains rich local displacement contexts that are not explicitly exploited in global motion prediction. Conversely, local motion prediction can benefit from global motion representation, since it provides object-level context that improves prediction consistency within an object. However, the complementarity of these two motion representations has not been fully explored in the literature. To this end, we propose Hybrid Motion Representation Learning (HyMo), a unified framework that addresses motion prediction by making the best of both global and local motion cues. We conduct extensive experiments on the nuScenes dataset. The results demonstrate that the learned hybrid motion representation achieves state-of-the-art performance on both global and local motion prediction tasks.
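To make the complementarity described above concrete, the following is a minimal, hypothetical PyTorch sketch of a hybrid model that cross-feeds the two representations: a local head predicts a dense per-cell displacement field conditioned on a global object context, while a global head predicts per-object trajectories conditioned on pooled local motion. All module names, tensor shapes, and the simple mean-pooling fusion are illustrative assumptions, not the actual HyMo architecture.

```python
import torch
import torch.nn as nn


class LocalMotionHead(nn.Module):
    """Predicts a dense per-cell displacement field (dx, dy) for each future step."""

    def __init__(self, in_channels: int, num_future_steps: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, 2 * num_future_steps, kernel_size=1),
        )

    def forward(self, bev_features: torch.Tensor) -> torch.Tensor:
        # (B, 2*T, H, W): one 2-D offset per BEV cell per future step.
        return self.conv(bev_features)


class GlobalMotionHead(nn.Module):
    """Predicts a future trajectory for each detected object."""

    def __init__(self, in_channels: int, num_future_steps: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_channels, in_channels),
            nn.ReLU(inplace=True),
            nn.Linear(in_channels, 2 * num_future_steps),
        )

    def forward(self, object_features: torch.Tensor) -> torch.Tensor:
        # (B, N, 2*T): one (x, y) waypoint per object per future step.
        return self.mlp(object_features)


class HybridMotionModel(nn.Module):
    """Toy hybrid model: each branch is enriched with context from the other."""

    def __init__(self, bev_channels: int = 64, num_future_steps: int = 6):
        super().__init__()
        # Local branch sees BEV features concatenated with a broadcast global context.
        self.local_head = LocalMotionHead(2 * bev_channels, num_future_steps)
        # Global branch sees object features concatenated with pooled local motion.
        self.global_head = GlobalMotionHead(bev_channels + 2 * num_future_steps,
                                            num_future_steps)

    def forward(self, bev_features: torch.Tensor, object_features: torch.Tensor):
        # bev_features: (B, C, H, W); object_features: (B, N, C)
        b, c, h, w = bev_features.shape
        n = object_features.shape[1]

        # Global -> local: mean object feature broadcast over the BEV grid.
        global_ctx = object_features.mean(dim=1).view(b, c, 1, 1).expand(b, c, h, w)
        local_motion = self.local_head(torch.cat([bev_features, global_ctx], dim=1))

        # Local -> global: spatially pooled displacement field appended to each object.
        local_ctx = local_motion.mean(dim=(2, 3)).unsqueeze(1).expand(b, n, -1)
        global_motion = self.global_head(torch.cat([object_features, local_ctx], dim=2))
        return global_motion, local_motion


if __name__ == "__main__":
    model = HybridMotionModel(bev_channels=64, num_future_steps=6)
    bev = torch.randn(2, 64, 128, 128)     # BEV features from a point-cloud backbone
    objs = torch.randn(2, 10, 64)          # pooled features of 10 detected objects
    global_traj, local_field = model(bev, objs)
    print(global_traj.shape, local_field.shape)  # (2, 10, 12) (2, 12, 128, 128)
```

The mutual conditioning here (mean pooling in both directions) is only a stand-in for whatever feature interaction the paper actually uses; the point is that each prediction head consumes both its own representation and context derived from the other.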
