Abstract

As a core function of autonomous driving and the Internet of Vehicles, accurately predicting vehicle trajectories can significantly improve traffic safety and reduce crash injuries. In this paper, we propose an intention-aware non-autoregressive Transformer model with multi-attention learning for multi-modal vehicle trajectory prediction. We first present social attention learning, in which graph attention is integrated with the Transformer encoder to model the social interactions between vehicles. The social and temporal dependencies across consecutive frames are then captured by temporal attention learning. These social and temporal attention modules can be interleaved and stacked to model the coupled dependencies and thus extract rich features from trajectory data. To achieve precise prediction as well as efficient inference, we further propose an intention-aware decoder query generation approach that produces multiple possible trajectories concurrently. Finally, cross-attention learning is devised to make full use of the encoded features and yield the future predictions. The proposed model is evaluated on two large-scale vehicle trajectory datasets, and the experimental results verify that our algorithm outperforms several state-of-the-art models. The root-mean-square error (RMSE) of the predicted trajectory over a 5 s time horizon is 3.43 m on the NGSIM dataset and 1.10 m on the HighD dataset.
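The interleaving of social attention (mixing information across vehicles within a frame) and temporal attention (mixing information across frames within a vehicle's track) can be sketched as follows. This is a minimal single-head NumPy illustration of the general idea only; the tensor shapes, the plain scaled dot-product form, and the function names are illustrative assumptions, not the paper's exact graph-attention design.

```python
import numpy as np

def attention(q, k, v):
    """Single-head scaled dot-product attention over the second-to-last axis.
    Illustrative stand-in for the paper's attention modules."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

def social_then_temporal(x):
    """One interleaved social + temporal attention pass (assumed layout).
    x: (T frames, N vehicles, D features)."""
    social = attention(x, x, x)        # attends over the N (vehicle) axis per frame
    xt = social.swapaxes(0, 1)         # (N, T, D): one track per vehicle
    temporal = attention(xt, xt, xt)   # attends over the T (frame) axis per vehicle
    return temporal.swapaxes(0, 1)     # back to (T, N, D)

# Toy history: 16 observed frames, 5 interacting vehicles, 8-dim embeddings.
T, N, D = 16, 5, 8
x = np.random.default_rng(0).normal(size=(T, N, D))
y = social_then_temporal(x)
print(y.shape)  # (16, 5, 8)
```

Because each pass preserves the `(T, N, D)` shape, such blocks can be stacked, which is what allows the coupled social/temporal modeling described above.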
