Abstract
The safe and efficient navigation of mobile robots in the presence of unknown dynamic obstacles remains a complex and unresolved challenge. This paper presents collision-free path planning for a mobile robot that safely handles multi-directional obstacles, that is, randomly moving dynamic obstacles, using a Deep Reinforcement Learning (DRL) algorithm, the Deep Q-Network (DQN), with inflated-robot reward functions. The robot follows a time-efficient, collision-free route while maintaining a safe distance from both static and unpredictable dynamic obstacles. The modified DQN algorithm takes RGB images of the environment as input for training a Convolutional Neural Network (CNN) and yields a safe, short navigation path. Training is performed with an omni-wheeled mobile robot exploring an outdoor (concourse) environment and an indoor (home) environment. The Closed-Loop Inverse Kinematics (CLIK) algorithm is employed to make the mobile robot follow the desired path. Simulation results indicate that the proposed algorithm with inflated-robot reward functions outperforms recently used Reinforcement Learning (RL) algorithms when dealing with both stationary and randomly moving obstacles in the given environment.
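To make the "inflated robot" idea concrete, the sketch below shows one plausible shape such a reward function could take: the robot's physical footprint is padded by a safety margin, so obstacles are penalized as soon as they enter the inflated radius, before an actual collision occurs. All names, radii, and reward magnitudes here are illustrative assumptions, not the paper's reported values.

```python
import math

# Hypothetical constants (assumptions, not taken from the paper).
ROBOT_RADIUS = 0.2   # physical radius of the omni-wheeled robot, in metres
INFLATION = 0.3      # extra safety margin added around the robot, in metres


def reward(robot_xy, goal_xy, obstacles_xy):
    """Return a scalar reward for one step of a DQN agent.

    robot_xy, goal_xy: (x, y) positions; obstacles_xy: list of (x, y).
    """
    inflated = ROBOT_RADIUS + INFLATION
    # Penalize proximity against the inflated footprint, not just contact.
    for ox, oy in obstacles_xy:
        d = math.hypot(robot_xy[0] - ox, robot_xy[1] - oy)
        if d <= ROBOT_RADIUS:
            return -100.0   # actual collision with the robot body
        if d <= inflated:
            return -10.0    # obstacle inside the safety margin
    # Bonus for reaching the goal region.
    if math.hypot(robot_xy[0] - goal_xy[0], robot_xy[1] - goal_xy[1]) < 0.1:
        return 100.0
    return -1.0             # small per-step cost encourages short paths
```

The graded penalty inside the inflated radius is what lets the learned policy keep a standoff distance from randomly moving obstacles rather than merely avoiding contact.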
Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science