Abstract

Predicting future behavior and positions of other traffic participants from observations is a key problem that needs to be solved by human drivers and automated vehicles alike to safely navigate their environment and to reach their desired goal. In this paper, we expand on previous work on an automotive environment model based on vector symbolic architectures (VSAs). We investigate a vector representation that encapsulates the spatial information of multiple objects based on a convolutive power encoding. Assuming that future positions of vehicles are influenced not only by their own past positions and dynamics (e.g., velocity and acceleration) but also by the behavior of the other traffic participants in the vehicle's surroundings, our motivation is threefold: first, we hypothesize that our structured vector representation is able to capture these relations and the mutual influence between multiple traffic participants; second, the dimension of the encoding vectors remains fixed, independent of the number of other vehicles encoded in addition to the target vehicle; and third, a VSA-based encoding allows us to combine symbol-like processing with the advantages of neural network learning. In this work, we use our vector representation as input to a long short-term memory (LSTM) network for sequence-to-sequence prediction of vehicle positions. In an extensive evaluation, we compare this approach to other LSTM-based benchmark systems using alternative data encoding schemes, to simple feed-forward neural networks, and to a simple linear prediction model for reference. We analyze the advantages and drawbacks of the presented methods and identify specific driving situations in which our approach performs best. We use the characteristics specifying such situations as the foundation for an online-learning mixture-of-experts prototype, which chooses at run time between several available predictors depending on the current driving situation to achieve the best possible forecast.
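
To illustrate the convolutive power encoding, the following is a minimal sketch in Python/NumPy, assuming circular convolution as the binding operation and random unitary base vectors for the two spatial axes. The dimension D, the base vectors X and Y, and all helper names are illustrative assumptions and do not reproduce the paper's actual implementation.

```python
# Minimal sketch of a VSA convolutive (fractional) power encoding of 2-D
# positions. All names and the dimension D are illustrative assumptions.
import numpy as np

D = 512  # vector dimension; fixed regardless of how many objects are encoded
rng = np.random.default_rng(0)

def unitary_vector(d, rng):
    """Random unitary vector: Fourier coefficients of unit magnitude, so
    convolutive powers neither grow nor shrink the vector."""
    v = rng.standard_normal(d)
    fc = np.fft.fft(v)
    return np.fft.ifft(fc / np.abs(fc)).real

X = unitary_vector(D, rng)
Y = unitary_vector(D, rng)

def cconv(a, b):
    """Circular convolution (the VSA binding operation), computed via the FFT."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def cconv_power(v, exponent):
    """Convolutive power v^exponent, i.e., element-wise exponentiation of the
    Fourier coefficients; real-valued exponents encode continuous quantities."""
    return np.fft.ifft(np.fft.fft(v) ** exponent).real

def encode_position(x, y):
    """Represent the point (x, y) as the binding X^x (*) Y^y."""
    return cconv(cconv_power(X, x), cconv_power(Y, y))

def encode_scene(positions):
    """Superimpose (sum and normalize) several object encodings into a single
    D-dimensional vector; the dimension stays the same for any object count."""
    s = np.sum([encode_position(x, y) for x, y in positions], axis=0)
    return s / np.linalg.norm(s)

# Target vehicle plus two surrounding vehicles, still a single 512-dim vector.
scene = encode_scene([(12.3, 1.8), (30.0, -1.6), (5.5, 5.2)])
print(scene.shape)  # (512,)
```

Because binding and superposition both preserve the dimension, adding more surrounding vehicles changes only the content, not the size, of the scene vector.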

Highlights

  • The race to autonomous driving is currently one of the main forces pushing research forward in the automotive domain

  • The long short-term memory (LSTM) models are implemented in TensorFlow (Abadi et al., 2016), whereas the neural engineering framework (NEF) models and the mixture-of-experts online-learning model are implemented using the Nengo software suite (Bekolay et al., 2014); see the sketch after this list

  • We presented a novel approach to encapsulate the spatial information of multiple objects in a sequence of semantic pointers of fixed vector length
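
To make the LSTM highlight concrete, the following is a minimal sequence-to-sequence sketch in TensorFlow/Keras that consumes fixed-dimension scene vectors (as produced by the encoding sketch above) and predicts future target-vehicle positions. The layer sizes, sequence lengths, and output format are illustrative assumptions and do not reproduce the architecture or hyperparameters used in the paper.

```python
# Hedged sketch: a sequence-to-sequence LSTM in TensorFlow/Keras that maps a
# sequence of D-dimensional VSA scene vectors to future (x, y) positions.
# All sizes and names are assumptions for illustration only.
import tensorflow as tf

D = 512       # dimension of the VSA scene vector (matches the encoding sketch)
T_OBS = 10    # number of observed time steps fed to the encoder
T_PRED = 5    # number of future time steps to predict

# Encoder: summarize the observed sequence of scene vectors into one state.
inputs = tf.keras.Input(shape=(T_OBS, D))
state = tf.keras.layers.LSTM(128)(inputs)

# Decoder: unroll over the prediction horizon and emit one (x, y) per step.
repeated = tf.keras.layers.RepeatVector(T_PRED)(state)
decoded = tf.keras.layers.LSTM(128, return_sequences=True)(repeated)
positions = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(2))(decoded)

model = tf.keras.Model(inputs, positions)
model.compile(optimizer="adam", loss="mse")  # e.g., trained on recorded trajectories
model.summary()
```

In this arrangement, the number of surrounding vehicles never changes the input shape, which is exactly the property the fixed-dimension VSA encoding is meant to provide.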


Summary

Introduction

The race to autonomous driving is currently one of the main forces pushing research forward in the automotive domain. Predicting future behavior and positions of other traffic participants from observations is essential for collision avoidance and safe motion planning, and it needs to be solved by human drivers and automated vehicles alike to reach their desired goal. Motion prediction for intelligent vehicles in general has seen extensive research in recent years (Polychronopoulos et al., 2007; Lawitzky et al., 2013; Lefèvre et al., 2014; Schmüdderich et al., 2015), as it is a cornerstone of collision-free automated driving. Lefèvre et al. (2014) classify such prediction approaches into three categories, namely physics-based, maneuver-based, and interaction-aware, depending on their level of abstraction. A growing number of interaction-aware approaches account for the dependencies and mutual influences between traffic participants or, more generally, agents in the scene. Classification approaches categorize and represent scenes in a hierarchy based on the most generic ones (Bonnin et al., 2012) in order to predict behavior for a variety of different situations.
