Abstract
This paper presents an enhanced Q-learning framework for autonomous self-driving agents navigating dynamic grid-based environments. The proposed method addresses key challenges in autonomous navigation, such as real-time decision-making, obstacle avoidance, and efficient path planning in environments with both static and dynamic obstacles. Unlike traditional approaches, this framework incorporates moving obstacles with randomized or predefined movement patterns, simulating real-world scenarios such as pedestrians or other vehicles on the road. A modified reward mechanism is introduced that heavily penalizes collisions while incentivizing efficient, safe navigation toward the goal. The agent is trained using reinforcement learning principles, with a policy that evolves through exploration and exploitation strategies, ensuring adaptability to complex environments. The framework also features real-time visualization, offering an intuitive representation of agent behavior, obstacle dynamics, and learning progression. Experimental findings demonstrate considerable gains in convergence speed, obstacle-avoidance efficiency, and flexibility compared to baseline Q-learning techniques. This study emphasizes the potential of Q-learning in dynamic, evolving environments, paving the way for its use in real-world autonomous systems such as robots and self-driving cars.

Keywords: Q-learning, Reinforcement Learning, Dynamic Grids, Autonomous Driving
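For concreteness, the core learning loop the abstract describes can be sketched as standard tabular Q-learning with epsilon-greedy action selection and a collision-penalizing reward scheme. The sketch below is illustrative only: the grid size, hyperparameters (ALPHA, GAMMA, EPSILON), and reward values (GOAL_REWARD, COLLISION_PENALTY, STEP_COST) are assumptions, since the abstract does not state the paper's exact configuration.

```python
# Minimal tabular Q-learning sketch, assuming a flat integer state encoding
# for grid cells and four movement actions. All constants are illustrative.
import numpy as np

N_STATES, N_ACTIONS = 100, 4      # e.g. a 10x10 grid; up/down/left/right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Reward scheme in the spirit of the modified mechanism described above;
# the environment's step function would return these values.
GOAL_REWARD = 100.0               # incentive for reaching the goal
COLLISION_PENALTY = -100.0        # heavy penalty for hitting an obstacle
STEP_COST = -1.0                  # small per-step cost encouraging short paths

Q = np.zeros((N_STATES, N_ACTIONS))

def choose_action(state: int) -> int:
    """Epsilon-greedy balance of exploration and exploitation."""
    if np.random.rand() < EPSILON:
        return np.random.randint(N_ACTIONS)   # explore: random action
    return int(np.argmax(Q[state]))           # exploit: best known action

def update(state: int, action: int, reward: float,
           next_state: int, done: bool) -> None:
    """Standard off-policy Q-learning temporal-difference update."""
    target = reward if done else reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (target - Q[state, action])
```

In this formulation, the dynamic obstacles described in the abstract would enter only through the environment's transitions and rewards; the update rule itself is unchanged, which is what allows the agent's policy to adapt as obstacle positions shift between steps.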