Abstract
Reinforcement learning (RL) has been widely used for decision-making in autonomous vehicles (AVs) in recent studies. However, existing RL methods generally seek the optimal policy by maximizing the expected future return, which lacks a distributional treatment of risky situations. Moreover, various uncertainties arising from the environment can lead to unreliable decisions, particularly in complex urban environments. In this paper, a fully parameterized quantile network (FPQN) is utilized to estimate the full return distribution, and the conditional value-at-risk (CVaR) is then applied to this distribution to generate uncertainty-aware driving behavior. In addition, an uncontrolled four-way intersection involving both surrounding vehicles (SVs) and pedestrians is built on the Simulation of Urban Mobility (SUMO) simulation platform. To better reflect real-world traffic, the uncertainty caused by occlusion and the behavioral uncertainty of surrounding traffic participants are also modeled. The experimental results suggest that the proposed method outperforms the baseline methods in terms of safety, and that it can make reasonable decisions in challenging driving cases in the presence of uncertainty.
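The core idea the abstract describes, acting on the CVaR of a learned return distribution rather than its mean, can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes a distributional head (such as an FPQN) has already produced quantile estimates of the return at given quantile fractions, and the function names and the simple tail-averaging rule are illustrative assumptions.

```python
import numpy as np

def cvar_from_quantiles(quantile_values, taus, alpha=0.1):
    """Estimate CVaR_alpha of a return distribution from quantile estimates.

    quantile_values: predicted returns at each quantile fraction (ascending).
    taus: quantile fractions in (0, 1), ascending, e.g. from an FPQN head.
    alpha: risk level; CVaR_alpha averages the worst alpha-tail of returns.
    Illustrative sketch only -- a real implementation would weight quantiles
    by their fraction widths rather than take a plain tail mean.
    """
    quantile_values = np.asarray(quantile_values, dtype=float)
    taus = np.asarray(taus, dtype=float)
    mask = taus <= alpha
    if not mask.any():
        # No quantile falls in the alpha-tail: fall back to the worst estimate.
        return float(quantile_values[0])
    return float(quantile_values[mask].mean())

def risk_sensitive_action(quantiles_per_action, taus, alpha=0.1):
    """Pick the action with the best worst-case (CVaR) return estimate."""
    cvars = [cvar_from_quantiles(q, taus, alpha) for q in quantiles_per_action]
    return int(np.argmax(cvars))
```

With alpha close to 1 the rule recovers (approximately) the risk-neutral mean-maximizing policy; a small alpha makes the agent conservative, which is what yields the safer behavior the abstract reports.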
Published in: IEEE Transactions on Intelligent Transportation Systems