Abstract

In the near future, robots will appear in almost every area of our lives, in different shapes and with different objectives such as entertainment, surveillance, rescue, and navigation. Whatever their shape or objective, they must be capable of successful exploration: they should explore efficiently and adapt to changes in their environment. Successful navigation also requires distinguishing between similar places in an environment, and achieving this without adding sensing capability makes a memory crucial. In this article, an algorithm for autonomous exploration and obstacle avoidance in an unknown environment is proposed. To make the algorithm self-learning, a memory-based reinforcement learning method with a multilayer neural network is used, with the aim of creating an agent that has an efficient exploration and obstacle-avoidance policy. Furthermore, this agent can automatically adapt to changes in its environment. Finally, to test the capability of our algorithm, we implemented it on a robot modeled after a real platform and simulated in the robust Gazebo physics engine.
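The abstract does not give implementation details, but the "memory-based reinforcement learning method using a multilayer neural network" it describes can be illustrated with a minimal sketch, assuming a DQN-style setup: a small multilayer perceptron that maps the robot's sensor reading to Q-values over discrete motion actions, plus a replay buffer of past transitions acting as the agent's memory. The class names, layer sizes, and buffer capacity below are illustrative assumptions, not the authors' code.

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Multilayer perceptron mapping a sensor state to one Q-value per action."""

    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class ReplayMemory:
    """Fixed-size buffer of past transitions; the 'memory' the agent learns from."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, d = map(np.array, zip(*batch))
        return (torch.as_tensor(s, dtype=torch.float32),
                torch.as_tensor(a, dtype=torch.int64),
                torch.as_tensor(r, dtype=torch.float32),
                torch.as_tensor(s2, dtype=torch.float32),
                torch.as_tensor(d, dtype=torch.float32))

    def __len__(self):
        return len(self.buffer)
```

Storing and replaying transitions is what lets the agent reuse earlier experience of similar-looking places instead of relying on richer sensors, which is the role the abstract assigns to memory.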

Highlights

  • Learning is crucial for intelligence, and a robot that can learn by itself can adapt to its environment and to changes in it

  • One example of reproducing such learning in machines is the family of reinforcement learning algorithms,[1] which can be considered a cognitive structure serving as a substrate for intelligent behaviors such as exploration, obstacle avoidance, and navigation

  • Reinforcement learning and a multilayer neural network are combined into an algorithm that learns by itself, from its own experience, to explore and avoid obstacles; in this text we call it the memory-based multilayer Q-network (MMQN). It differs from the reinforcement learning–based works mentioned above in two ways: first, it learns from scratch, in contrast with Tai et al.[11] and Lei and Ming,[14] who initialized their network weights from a previously trained network; second, it adapts autonomously and continuously to an unknown environment or to changes in its environment


Summary

Introduction

Learning is crucial for intelligence, and a robot that can learn by itself can adapt to its environment and to changes in it. Here, reinforcement learning and a multilayer neural network are combined into an algorithm that learns by itself, from its own experience, to explore and avoid obstacles; we call it the memory-based multilayer Q-network (MMQN). MMQN differs from the reinforcement learning–based works discussed above in two ways: first, it learns from scratch, in contrast with Tai et al.[11] and Lei and Ming,[14] who initialized their network weights from a previously trained network; second, it adapts autonomously and continuously to an unknown environment or to changes in its environment.
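The two properties claimed here, learning from scratch and continuous adaptation, can be sketched as a standard Q-learning training loop built on the QNetwork and ReplayMemory shown earlier: weights start random (no pretrained initialization), exploration is epsilon-greedy, and updates never stop, so the policy keeps adjusting when the environment changes. The `env` interface (`reset`, `step`, `sample_action`) is a hypothetical wrapper around the simulated robot, for example a Gazebo environment; it and all hyperparameters below are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F


def train_step(q_net, optimizer, memory, batch_size=64, gamma=0.99):
    """One gradient step on the Bellman error over a minibatch sampled from memory."""
    if len(memory) < batch_size:
        return
    s, a, r, s2, d = memory.sample(batch_size)
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)                # Q(s, a)
    with torch.no_grad():
        target = r + gamma * (1.0 - d) * q_net(s2).max(dim=1).values  # Bellman target
    loss = F.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()


def run(env, q_net, optimizer, memory, episodes=500,
        eps_start=1.0, eps_min=0.1, eps_decay=0.995):
    """Learn from scratch with epsilon-greedy exploration; learning never stops,
    so the policy keeps adapting if the environment changes."""
    eps = eps_start
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if torch.rand(1).item() < eps:
                action = env.sample_action()                          # explore
            else:
                with torch.no_grad():
                    q_values = q_net(torch.as_tensor(state, dtype=torch.float32))
                    action = int(q_values.argmax())                   # exploit
            next_state, reward, done = env.step(action)
            memory.push(state, action, reward, next_state, done)
            train_step(q_net, optimizer, memory)
            state = next_state
        eps = max(eps_min, eps * eps_decay)                           # keep some exploration
```

Because epsilon never decays to zero and gradient updates continue throughout deployment, an agent of this form can recover when obstacles move or the layout changes, which is the kind of autonomous, continuous adaptation the introduction attributes to MMQN.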

