Abstract

Decision making under uncertainty is a key challenge in automated driving. One source of uncertainty is the unknown driving behavior of the other traffic participants surrounding the automated vehicle. To take this uncertainty into account, we formulate the planning problem as a Partially Observable Markov Decision Process (POMDP) and solve it with the state-of-the-art sampling-based POMCP(OW) solver. Within planning, a derivative of the Intelligent Driver Model (IDM) is used to predict other traffic participants. For evaluation, the POMDP planner runs in a closed-loop simulation environment where the other traffic participants are controlled by machine-learned driver models. The different models used in planning and simulation constitute a true prediction model error. Compared to a classical search-based planner (and reactive driver models), the POMDP leads to fewer critical situations while retaining the comfort and efficiency of foresighted planning. Both planning approaches apply a particle filter to estimate the others' driving characteristics, but only the POMDP approach is able to consider the whole range of possible characteristics. Several variants of the POMDP solver were evaluated, allowing for different weightings of safety vs. comfort/efficiency.
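To illustrate the kind of prediction model the abstract refers to, the following is a minimal sketch of the standard Intelligent Driver Model (IDM) longitudinal acceleration law, not the paper's specific IDM derivative; all parameter values are generic textbook defaults, not the ones used in the paper.

```python
import math

def idm_acceleration(v, gap, dv,
                     v0=33.3,    # desired speed [m/s] (assumed value)
                     T=1.5,      # desired time headway [s]
                     a_max=1.5,  # maximum acceleration [m/s^2]
                     b=2.0,      # comfortable deceleration [m/s^2]
                     s0=2.0,     # minimum standstill gap [m]
                     delta=4.0): # acceleration exponent
    """Standard IDM acceleration (sketch, generic parameters).

    v   -- own speed [m/s]
    gap -- bumper-to-bumper distance to the leading vehicle [m]
    dv  -- approach rate v - v_leader [m/s]
    """
    # Desired dynamic gap: standstill gap + headway term + braking term.
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))
    # Free-road acceleration reduced by the interaction (gap) term.
    return a_max * (1.0 - (v / v0) ** delta - (max(s_star, 0.0) / gap) ** 2)
```

In a particle-filter setting such as the one described above, parameters like `T`, `a_max`, and `b` would be the per-vehicle driving characteristics whose distribution the filter estimates.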
