Abstract

In this paper, we propose a framework for predicting the future trajectories of agents in dynamic scenes, based on a recurrent neural network (RNN) and maximum-margin inverse reinforcement learning (IRL). Given the current position of a target agent and the corresponding static scene information, an RNN is trained to produce a next position that is as close as possible to the true next position while maximizing the proposed reward function. The reward function is trained simultaneously to maximize the margin between the rewards of the true next position and its estimate. The reward function acts as a regularizer on the network parameters during training, so the trained network can reason about the agent's next position much more accurately. We evaluated our model on the public KITTI dataset. Experimental results show that the proposed method significantly improves prediction accuracy over baseline methods.
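To make the joint objective concrete, here is a minimal sketch of how an RNN predictor and a max-margin reward term could be trained together. This is an illustration under assumptions, not the authors' implementation: the module architectures, dimensions, the margin value, and the weighting coefficient `lam` are all hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the joint RNN + max-margin IRL objective.
# Architectures, dimensions, and hyperparameters are assumptions.

class TrajectoryRNN(nn.Module):
    """Predicts the agent's next 2-D position from its position history."""
    def __init__(self, hidden_dim=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)

    def forward(self, positions):          # positions: (B, T, 2)
        out, _ = self.rnn(positions)
        return self.head(out[:, -1])       # predicted next position: (B, 2)

class RewardNet(nn.Module):
    """Scores a candidate next position given static scene features."""
    def __init__(self, scene_dim=16, hidden_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + scene_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, position, scene):    # (B, 2), (B, scene_dim)
        return self.mlp(torch.cat([position, scene], dim=-1)).squeeze(-1)

def joint_loss(pred_pos, true_pos, scene, reward_net, margin=1.0, lam=0.1):
    """Regression loss plus a max-margin IRL term: the true next position
    should out-score the RNN's estimate by at least `margin` under the
    learned reward, which acts as a regularizer on the predictor."""
    l2 = ((pred_pos - true_pos) ** 2).sum(-1).mean()
    r_true = reward_net(true_pos, scene)
    r_pred = reward_net(pred_pos, scene)
    hinge = torch.clamp(margin - (r_true - r_pred), min=0.0).mean()
    return l2 + lam * hinge
```

Optimizing `joint_loss` over both networks simultaneously would push the predictor toward the ground truth while the hinge term shapes a reward that separates true next positions from the predictor's estimates, which is one plausible reading of the training scheme described above.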
