Abstract

Reinforcement learning algorithms are time- and resource-intensive, and their outcomes can be influenced by the setup of the physical robot environment and its hardware capabilities. In small-scale projects it is often not feasible to build a physical robot with adequate processing power and to set up a fully controlled and monitored environment for reinforcement learning. In such cases, it is more cost-effective to conduct most of the training in a simulated environment and then transfer the learned model to a physical robot. In this project, two RL experiments were conducted on a simple two-wheeled robot model in a simulated environment: in the first, the robot started from a completely fallen position and learned to stand up; in the second, it started from a balanced position and learned to maintain that position. Starting from a balanced position gave better performance, so this learned model was used as a baseline for testing on a physical robot, the LEGO Mindstorms. However, the LEGO hardware proved not well suited to this kind of computationally intensive reinforcement learning algorithm.

Keywords: Self-balancing robot, Reinforcement learning, Simulation
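The abstract does not name the learning algorithm, simulator, or reward design. As an illustration of the general setup it describes, the following is a minimal sketch, assuming tabular Q-learning on a toy inverted-pendulum model standing in for the simulated two-wheeled robot; all names (PendulumSim, the torque actions, the reward shaping) are hypothetical and not from the paper.

```python
import math
import random
from collections import defaultdict

class PendulumSim:
    """Toy inverted-pendulum stand-in for the simulated two-wheeled robot (hypothetical)."""
    DT, GRAVITY, LENGTH = 0.02, 9.81, 0.5

    def reset(self, fallen=False):
        # Start fully fallen (experiment 1) or near upright (experiment 2).
        self.theta = math.pi if fallen else random.uniform(-0.05, 0.05)
        self.omega = 0.0
        return self._state()

    def step(self, torque):
        # Simplified dynamics: gravity pulls the pole over, applied torque fights back.
        self.omega += (self.GRAVITY / self.LENGTH * math.sin(self.theta) + torque) * self.DT
        self.theta += self.omega * self.DT
        reward = 1.0 if abs(self.theta) < 0.3 else -1.0  # reward for staying upright
        done = abs(self.theta) > math.pi / 2             # fell past horizontal
        return self._state(), reward, done

    def _state(self):
        # Discretize angle and angular velocity so a tabular Q-table stays small.
        return (round(self.theta, 1), round(self.omega, 1))

ACTIONS = [-2.0, 0.0, 2.0]          # torque choices
Q = defaultdict(float)              # Q[(state, action)] -> value estimate
alpha, gamma, eps = 0.1, 0.99, 0.1  # learning rate, discount, exploration

env = PendulumSim()
for episode in range(2000):
    state, done = env.reset(fallen=False), False  # experiment 2: start balanced
    for _ in range(500):
        if done:
            break
        # Epsilon-greedy action selection.
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = env.step(action)
        # One-step Q-learning update.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt
```

In the workflow the abstract describes, the policy learned in simulation (here, the Q-table) would then be transferred to the physical robot for evaluation.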
