Abstract
Building a map representation of the surrounding environment is crucial for the successful operation of autonomous robots. While extensive research has concentrated on mapping geometric structures and static objects, the environment is also shaped by the movement of dynamic objects. Integrating information about spatial motion patterns in an environment can be beneficial for planning socially compliant trajectories, avoiding congested areas, and aligning with the general flow of people. In this paper, we introduce a deep state-space model designed to learn map representations of spatial motion patterns and their temporal changes at specific locations. This enables human-compliant robot operation and improved trajectory forecasting in environments with evolving motion patterns. We validate the proposed method on two datasets: one comprising generated motion patterns and the other featuring real-world pedestrian data. The model’s performance is assessed in terms of learning capability, mapping quality, and applicability to downstream robotics tasks. For comparative assessment of mapping quality, we employ CLiFF-Map as a baseline, and CLiFF-LHMP serves as another baseline for evaluating performance in downstream motion prediction tasks. The results demonstrate that our model effectively learns the underlying motion patterns and holds promising potential for application in robotic tasks.