Abstract
Autonomous Traffic Management (ATM) systems empowered with Machine Learning (ML) techniques are a promising solution for eliminating traffic lights and decreasing traffic congestion in the future. However, few efforts have focused on integrating pedestrians into ATM; one notable exception is the static, programming-based cooperative protocol called Autonomous Pedestrian Crossing (APC). In this paper, we model a Markov Decision Process (MDP) to enable a Deep Reinforcement Learning (DRL)-based version of the APC protocol that dynamically achieves the same objective (i.e., decreasing traffic delay at the crossing area). Using a concrete state space, action set, and reward function, our model forces the Autonomous Vehicle (AV) to "think" and behave according to the APC architecture. Compared to the traditional statically programmed APC system, our approach allows the AV to learn from its previous experiences at non-signalized crossings and to optimize its distance and velocity parameters accordingly.
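To make the MDP formulation concrete, the sketch below illustrates what the state space, action set, and reward function described above could look like in code. This is purely illustrative: the paper's actual state discretization, action set, and reward shaping are not given in the abstract, so every numeric range and penalty here is an assumption, and the tabular Q-learning update is a standard DRL-adjacent placeholder rather than the authors' method.

```python
# Hypothetical sketch of an MDP for a vehicle approaching a non-signalized
# pedestrian crossing. All ranges, bins, and penalties below are assumptions
# for illustration, not the paper's actual model.

# State: (distance_to_crossing_m, velocity_mps), discretized into bins (assumed).
DISTANCES = list(range(0, 101, 10))   # 0..100 m in 10 m bins (assumed)
VELOCITIES = list(range(0, 16, 5))    # 0..15 m/s in 5 m/s bins (assumed)
ACTIONS = ["accelerate", "maintain", "decelerate"]

def reward(distance, velocity, pedestrian_present):
    """Assumed reward: heavily penalize unsafe speed near an occupied
    crossing; otherwise penalize deviation from nominal speed (delay)."""
    if pedestrian_present and distance <= 10 and velocity > 5:
        return -100.0                      # safety violation near the crossing
    return -abs(15 - velocity) * 0.1       # nominal speed -> minimal delay

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update over the assumed state/action space."""
    best_next = max(Q.get((s_next, b), 0.0) for b in ACTIONS)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
    return Q
```

Under this kind of formulation, the AV would repeatedly observe a (distance, velocity) state, pick an action, and refine its value estimates from experience, which is the dynamic behavior the abstract contrasts with a statically programmed protocol.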