Abstract
This article presents a mapless movement policy for mobile agents, designed specifically to be fault-tolerant. The policy, learned through deep reinforcement learning, has an advantage over typical mapless policies: it can continue to control a robot even when some of its sensors fail. It is an end-to-end policy built on three neural models that not only moves the robot and maximizes coverage of the environment, but also learns the movement behavior best suited to the robot's perception needs. A custom robot whose sensor readings do not overlap one another has been used; this setup makes it possible to evaluate the robustness of the policy under failures, since the failure of any single sensor unambiguously degrades perception. The proposed system offers advantages in terms of robustness, extensibility, and utility. It has been trained and tested exhaustively in a simulator, obtaining very good results, and has also been transferred to real robots, confirming that our model generalizes and performs well in real environments.
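To make the fault-tolerance idea concrete, below is a minimal sketch of how a deep-RL policy network can be made aware of sensor failures by pairing each reading with a validity flag. The class name, layer sizes, and the mask-based input encoding are illustrative assumptions for this sketch; the abstract states only that the actual system comprises three neural models, whose architecture is not detailed here.

```python
import torch
import torch.nn as nn

class FaultTolerantPolicy(nn.Module):
    """Illustrative actor: maps range readings plus a per-sensor fault
    mask to bounded motion commands (not the authors' architecture)."""

    def __init__(self, n_sensors: int = 16, hidden: int = 128):
        super().__init__()
        # Each sensor contributes its reading and a validity flag (1 = working).
        self.net = nn.Sequential(
            nn.Linear(2 * n_sensors, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # linear and angular velocity
            nn.Tanh(),             # commands bounded to [-1, 1]
        )

    def forward(self, readings: torch.Tensor, fault_mask: torch.Tensor) -> torch.Tensor:
        # Zero out readings from broken sensors and append the mask, so the
        # network learns which inputs to trust instead of ingesting stale values.
        x = torch.cat([readings * fault_mask, fault_mask], dim=-1)
        return self.net(x)

# Example: 16 range sensors, with sensors 3 and 7 marked as failed.
policy = FaultTolerantPolicy(n_sensors=16)
readings = torch.rand(1, 16)
mask = torch.ones(1, 16)
mask[0, 3] = mask[0, 7] = 0.0
velocity_cmd = policy(readings, mask)  # shape (1, 2): [v, w]
```

Because no two sensor footprints overlap on the custom robot described above, zeroing one entry of the mask removes information that no other sensor can supply, which is what makes failures unambiguous in this setting.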