Abstract
Autonomous vehicles are cyber-physical systems that combine embedded computing and deep learning with physical systems to perceive the world, predict future states, and safely control the vehicle through changing environments. The ability of an autonomous vehicle to accurately predict the motion of other road users across a wide range of diverse scenarios is critical for both motion planning and safety. However, existing motion prediction methods do not explicitly model contextual information about the environment, which can cause significant variations in performance across diverse driving scenarios. To address this limitation, we propose CASTNet: a dynamic, context-aware approach for motion prediction that (i) identifies the current driving context using a spatio-temporal model, (ii) adapts an ensemble of motion prediction models to fit the current context, and (iii) applies novel trajectory fusion methods to combine predictions output by the ensemble. This approach enables CASTNet to improve robustness by minimizing motion prediction error across diverse driving scenarios. CASTNet is highly modular and can be used with various existing image processing backbones and motion predictors. We demonstrate how CASTNet can improve both CNN-based and graph-learning-based motion prediction approaches and conduct ablation studies on the performance, latency, and model size for various ensemble architecture choices. In addition, we propose and evaluate several attention-based spatio-temporal models for context identification and ensemble selection. We also propose a modular trajectory fusion algorithm that effectively filters, clusters, and fuses the predicted trajectories output by the ensemble. On the nuScenes dataset, our approach demonstrates more robust and consistent performance across diverse, real-world driving contexts than state-of-the-art techniques.
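The sketch below is only an illustration of the three-stage pipeline the abstract describes (context identification, context-adapted ensemble prediction, trajectory fusion); it is not the paper's implementation. All names (`ContextIdentifier`, `MotionPredictor`, `fuse_trajectories`) and the placeholder logic inside them (uniform context weights, constant-velocity rollouts, weighted-average fusion in place of the paper's filter-cluster-fuse procedure) are hypothetical stand-ins.

```python
# Hypothetical sketch of a CASTNet-style pipeline, NOT the authors' code.
import numpy as np
from typing import List


class ContextIdentifier:
    """Stand-in for the spatio-temporal context model (stage i)."""

    def __init__(self, num_contexts: int):
        self.num_contexts = num_contexts

    def __call__(self, scene_features: np.ndarray) -> np.ndarray:
        # Placeholder: uniform soft weights over contexts. The actual model
        # would attend over spatio-temporal scene features.
        return np.full(self.num_contexts, 1.0 / self.num_contexts)


class MotionPredictor:
    """Stand-in for one ensemble member (e.g., CNN- or graph-based)."""

    def __call__(self, agent_history: np.ndarray, horizon: int) -> np.ndarray:
        # Placeholder: constant-velocity rollout from the last two observations.
        velocity = agent_history[-1] - agent_history[-2]
        steps = np.arange(1, horizon + 1)[:, None]
        return agent_history[-1] + steps * velocity


def fuse_trajectories(trajectories: List[np.ndarray],
                      weights: np.ndarray) -> np.ndarray:
    # Placeholder fusion: weighted average of ensemble outputs. The paper's
    # fusion additionally filters and clusters candidate trajectories first.
    stacked = np.stack(trajectories)                # (K, horizon, 2)
    weights = weights / weights.sum()
    return np.tensordot(weights, stacked, axes=1)   # (horizon, 2)


def predict(scene_features, agent_history, predictors, context_model, horizon=12):
    # Assumes one ensemble member per context for simplicity.
    context_weights = context_model(scene_features)               # stage (i)
    candidates = [p(agent_history, horizon) for p in predictors]  # stage (ii)
    return fuse_trajectories(candidates, context_weights)         # stage (iii)


if __name__ == "__main__":
    history = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
    ensemble = [MotionPredictor() for _ in range(3)]
    context = ContextIdentifier(num_contexts=3)
    print(predict(np.zeros(8), history, ensemble, context).shape)  # (12, 2)
```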