Abstract

In the future, autonomous vehicles are expected to move people and cargo around safely. At present, however, automated driving systems do not necessarily outperform human drivers in all circumstances, particularly under adverse road and environmental conditions such as bright light, heavy rain, and poor-quality roads and traffic signs. Under such conditions it is therefore safer for the human driver to take over control of the vehicle. However, switching control back and forth between the human driver and the automated driving entity may itself pose a short-term, elevated risk, particularly because of the out-of-the-loop (OOTL) issue for humans. In this study, we develop a mathematical framework to determine the optimal driving-entity switching policy between the automated driving entity and the human driver. Specifically, we develop a Markov decision process (MDP) model that prescribes the entity in charge so as to minimize the expected safety cost of a trip, accounting for dynamic changes in the road/environment during the trip. In addition, we develop a partially observable Markov decision process (POMDP) model to accommodate the fact that the risk posed by the immediate road/environment may only be partially observed. We conduct extensive numerical experiments and thorough sensitivity and robustness analyses, in which we also compare the expected safety cost of trips under the optimal policy and under single-driving-entity policies. We further quantify the risks associated with these policies, as well as the impact of the driving entities misestimating the risk level of the road/environment conditions, and provide insights. The proposed frameworks can serve as a policy tool to identify factors that render a region suitable for Level 4 autonomy.
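
To give a concrete flavor of the kind of switching model described above, the sketch below solves a toy MDP by value iteration, with states given by a (road-risk level, entity in charge) pair and actions "keep" or "switch" the driving entity. It is a minimal illustration only: the risk levels, per-step safety costs, switching cost, transition matrix, and the discounted infinite-horizon formulation are all assumptions made here for demonstration and are not taken from the paper, which minimizes the expected safety cost of a trip.

```python
import numpy as np

# Toy switching MDP (illustrative assumptions only, not the paper's model).
# State: (road-risk level, entity in charge); action: 0 = keep, 1 = switch.
RISK_LEVELS = 3          # e.g., low / medium / high road-environment risk (assumed)
ENTITIES = 2             # 0 = automated driving entity, 1 = human driver
GAMMA = 0.95             # discount factor (assumed)
SWITCH_COST = 0.5        # transient safety cost of a handover, e.g., OOTL risk (assumed)

# Assumed per-step safety cost of each entity at each risk level.
step_cost = np.array([[0.1, 0.4, 1.0],   # automation
                      [0.3, 0.5, 0.6]])  # human driver

# Assumed Markov transition matrix over road-risk levels.
P_risk = np.array([[0.8, 0.15, 0.05],
                   [0.2, 0.6,  0.2 ],
                   [0.1, 0.3,  0.6 ]])

V = np.zeros((RISK_LEVELS, ENTITIES))
for _ in range(500):                      # value iteration to (near) convergence
    V_new = np.empty_like(V)
    for r in range(RISK_LEVELS):
        for e in range(ENTITIES):
            q = []
            for a in (0, 1):              # keep current entity or switch
                e_next = e if a == 0 else 1 - e
                cost = step_cost[e_next, r] + (SWITCH_COST if a else 0.0)
                q.append(cost + GAMMA * P_risk[r] @ V[:, e_next])
            V_new[r, e] = min(q)          # minimize expected safety cost
    V = V_new

print("Minimal expected discounted safety cost per (risk level, entity):")
print(V.round(3))
```

Reading off the minimizing action at each state yields the switching policy; the paper's POMDP variant would instead maintain a belief over the risk level rather than observing it directly.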
