Abstract
In dynamic environments, robot path planning and obstacle avoidance are critical tasks, especially in applications such as autonomous driving, industrial automation, and mobile robotics. These tasks are inherently challenging due to the unpredictability of the environment and the need for real-time decision-making. This paper addresses these challenges by developing and analyzing both a traditional and an optimized model for robot navigation. The initial model uses a basic Q-learning algorithm, which provides a straightforward way to learn from the environment but often struggles with the complexity of dynamic scenarios. To address this, an optimized model is developed that combines the Double Deep Q-Learning algorithm (Double DQN) with heuristic strategies. The research employs the MATLAB Reinforcement Learning Toolbox to implement and train these models, using a simulated environment with dynamic obstacles as the testbed. The simulation generates the data needed for comprehensive testing and evaluation of the models' performance. The results show that the optimized model substantially outperforms the initial model in path planning efficiency and obstacle avoidance capability. They further indicate that combining advanced reinforcement learning techniques with heuristic strategies is essential for enhancing the performance and reliability of robotic systems in complex, dynamic environments, offering valuable insights for future applications across various fields of robotics.
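For readers unfamiliar with the Double DQN update mentioned in the abstract, the sketch below illustrates its core idea: the online network selects the next action while a separate target network evaluates it, which reduces the value overestimation of standard Q-learning and DQN. This is a minimal illustrative sketch in Python/NumPy; the function names (`online_q`, `target_q`, `double_dqn_targets`) are assumptions for exposition and do not reflect the paper's MATLAB Reinforcement Learning Toolbox implementation or its heuristic extensions.

```python
import numpy as np

def double_dqn_targets(rewards, next_states, dones, online_q, target_q, gamma=0.99):
    """Compute Double DQN bootstrap targets for a batch of transitions.

    online_q, target_q: callables mapping a batch of states (shape (batch, state_dim))
    to Q-value arrays of shape (batch, n_actions). Names are illustrative.
    """
    # Action selection with the online network (decoupled from evaluation).
    q_online_next = online_q(next_states)
    best_actions = np.argmax(q_online_next, axis=1)

    # Action evaluation with the target network.
    q_target_next = target_q(next_states)
    q_selected = q_target_next[np.arange(len(best_actions)), best_actions]

    # Standard bootstrapped target; terminal transitions contribute reward only.
    return rewards + gamma * (1.0 - dones) * q_selected
```

In a training loop, these targets would replace the single-network maximum used by basic Q-learning, which is the main source of the overestimation the optimized model seeks to avoid.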