Abstract
Learning to navigate in an unknown environment is a crucial capability for a mobile robot. Conventional robot navigation consists of three steps: localization, map building, and path planning. However, most conventional navigation methods rely on an obstacle map and lack the ability to learn autonomously. In contrast to the traditional approach, we propose an end-to-end approach that uses deep reinforcement learning for the navigation of mobile robots in an unknown environment. The model is trained with deep reinforcement learning techniques using the Q-Learning algorithm. Through Q-Learning, the mobile robot gradually learns the environment as it wanders and learns to navigate to the target destination. The experimental results show that the mobile robot can reach the desired targets without colliding with any obstacles. In the future, the same approach can be enhanced with object-classification methods so that the robot can recognize traffic signals and travel on roads; it could further be extended toward a self-driving car. A cascade classifier is used for classifying the data.
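For context, the sketch below illustrates the tabular form of the Q-Learning update named in the abstract, applied to a small grid-navigation task with obstacles and a goal cell. The grid layout, reward values, and hyperparameters are illustrative assumptions and are not taken from the paper, which trains a deep network rather than a lookup table.

import random
import numpy as np

GRID = 5                                       # assumed 5x5 grid world
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
GOAL = (4, 4)
OBSTACLES = {(2, 2), (3, 1)}                   # assumed obstacle cells

alpha, gamma, epsilon = 0.1, 0.9, 0.2          # assumed hyperparameters
Q = np.zeros((GRID, GRID, len(ACTIONS)))

def step(state, action):
    """Move the robot; walls and obstacles keep it in place with a penalty."""
    r, c = state
    dr, dc = ACTIONS[action]
    nr, nc = r + dr, c + dc
    if not (0 <= nr < GRID and 0 <= nc < GRID) or (nr, nc) in OBSTACLES:
        return state, -1.0, False              # penalize collisions
    if (nr, nc) == GOAL:
        return (nr, nc), 10.0, True            # reward for reaching the target
    return (nr, nc), -0.1, False               # small step cost favors short paths

for episode in range(2000):
    state, done = (0, 0), False
    while not done:
        if random.random() < epsilon:          # epsilon-greedy exploration
            action = random.randrange(len(ACTIONS))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[state][action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state][action])
        state = next_state

After training, following the greedy action argmax over Q from each cell yields a collision-free path to the goal, which is the behavior the abstract reports for the learned policy.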