Abstract

In autonomous traffic management, reinforcement learning is widely used as an effective decision-making method for vehicle control. Unexpectedly demanding circumstances, however, can trigger collisions and, as a consequence, chain collisions. To offer guidance on preventing and reducing the risk of chain collisions, we first analyze their main types and the way such chain events typically unfold. This study then proposes mobile-integrated deep reinforcement learning (DRL) for autonomous vehicles to handle collision avoidance in emergencies. The proposed strategy explicitly accounts for three key factors: safety (accuracy), efficiency, and passenger comfort. We formulate chain collision avoidance as a Markov Decision Process (MDP) and derive a decision-making strategy based on mobile-integrated reinforcement learning, evaluating its safety performance against currently employed safe-driving solutions. The findings of this analysis are intended to help researchers and policymakers appreciate the benefits of a more reliable autonomous traffic infrastructure and to pave the way for the practical adoption of driverless traffic.
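The abstract gives no implementation details, so the sketch below is only an illustrative assumption of how a chain collision scenario might be cast as an MDP with a reward combining the three factors named above (safety, efficiency, and passenger comfort). All names and parameter values (ChainCollisionEnv, w_safety, the action set, vehicle dynamics) are hypothetical and are not taken from the paper; a DRL agent such as DQN would be trained on top of such an environment.

```python
import numpy as np

class ChainCollisionEnv:
    """Hypothetical MDP sketch: an ego vehicle follows a lead vehicle that
    brakes suddenly; the agent picks a longitudinal acceleration each step."""

    DT = 0.1                           # simulation step [s]
    ACTIONS = (-6.0, -3.0, 0.0, 1.5)   # candidate accelerations [m/s^2]

    def __init__(self, w_safety=1.0, w_efficiency=0.1, w_comfort=0.05):
        # Reward weights for the three factors named in the abstract
        # (values are illustrative assumptions, not from the paper).
        self.w_safety = w_safety
        self.w_efficiency = w_efficiency
        self.w_comfort = w_comfort
        self.reset()

    def reset(self):
        self.gap = 30.0       # distance to lead vehicle [m]
        self.ego_v = 20.0     # ego speed [m/s]
        self.lead_v = 20.0    # lead speed [m/s]
        self.prev_a = 0.0
        return self._obs()

    def _obs(self):
        return np.array([self.gap, self.ego_v, self.lead_v], dtype=np.float32)

    def step(self, action_idx):
        a = self.ACTIONS[action_idx]
        # Lead vehicle performs an emergency brake: the chain-collision risk.
        self.lead_v = max(0.0, self.lead_v - 5.0 * self.DT)
        self.ego_v = max(0.0, self.ego_v + a * self.DT)
        self.gap += (self.lead_v - self.ego_v) * self.DT

        crashed = self.gap <= 0.0
        # Composite reward: penalize collisions (safety), reward progress
        # (efficiency), and penalize jerky control changes (comfort).
        r_safety = -100.0 if crashed else 0.0
        r_efficiency = self.ego_v / 20.0
        r_comfort = -abs(a - self.prev_a)
        self.prev_a = a
        reward = (self.w_safety * r_safety
                  + self.w_efficiency * r_efficiency
                  + self.w_comfort * r_comfort)
        done = crashed or (self.ego_v == 0.0 and self.lead_v == 0.0)
        return self._obs(), reward, done

if __name__ == "__main__":
    env = ChainCollisionEnv()
    obs, done, total = env.reset(), False, 0.0
    while not done:
        # Placeholder policy (moderate braking); a trained DRL policy would
        # map observations to actions here instead.
        obs, r, done = env.step(1)
        total += r
    print("episode return:", round(total, 2))
```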
