Abstract
In the past few years, the field of autonomous robots has been studied intensively, and non-industrial applications of robotics are rapidly emerging. One of the most interesting aspects of this field is the development of learning abilities that enable robots to adapt autonomously to a given environment without human guidance. In contrast to conventional robot control, where humans logically design a robot's behavior, the ability to acquire action strategies through a learning process not only significantly reduces the production cost of robots but also improves their applicability to a wider range of tasks and environments. However, learning algorithms usually incur a large computational cost, which makes them unsuitable for robots with limited resources. In this study, we propose a simple two-layered neural network that implements a novel and fast reinforcement learning method. The proposed learning method requires significantly fewer computational resources and is therefore applicable to small physical robots operating in real-world environments. For this study, we built several simple robots and implemented the proposed learning mechanism on them. In the experiments, to evaluate the efficacy of the proposed learning mechanism, several robots were trained simultaneously to acquire obstacle-avoidance strategies in the same environment. This setup forms a dynamic environment in which the learning task is substantially harder than learning in a static environment, and promising results were obtained.
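The abstract does not describe the learning rule itself, but the general idea of a small two-layer network trained by reinforcement learning to avoid obstacles can be illustrated with a minimal sketch. The sensor count, action set, epsilon-greedy policy, temporal-difference update, and reward signal below are illustrative assumptions only, not the authors' method.

```python
# Minimal sketch (assumptions, not the paper's algorithm): a two-layer
# (input -> output) linear network maps distance-sensor readings to action
# values and is trained with a cheap temporal-difference update.
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS, N_ACTIONS = 4, 3            # assumed: 4 distance sensors, 3 motor commands
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed hyperparameters

# Two-layer network as a single weight matrix: Q(s) = W @ s
W = rng.normal(scale=0.01, size=(N_ACTIONS, N_SENSORS))

def select_action(state: np.ndarray) -> int:
    """Epsilon-greedy choice over the network's action values."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(W @ state))

def update(state: np.ndarray, action: int, reward: float, next_state: np.ndarray) -> None:
    """One temporal-difference step; only one row of W is touched per step,
    which keeps the computational cost small for a resource-limited robot."""
    td_target = reward + GAMMA * float(np.max(W @ next_state))
    td_error = td_target - float((W @ state)[action])
    W[action] += ALPHA * td_error * state

# Toy interaction loop with random sensor readings: a negative reward is
# given when the front sensor (index 0) reports a close obstacle and the
# robot keeps moving forward (action 0); otherwise a small positive reward.
state = rng.random(N_SENSORS)
for step in range(1000):
    action = select_action(state)
    next_state = rng.random(N_SENSORS)
    reward = -1.0 if (state[0] < 0.2 and action == 0) else 0.1
    update(state, action, reward, next_state)
    state = next_state
```

In a physical setup, the random readings would be replaced by real sensor values and the reward by collision or proximity feedback; the learning step remains a single matrix-row update, which is what keeps the method feasible on small robots.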