Abstract

Bottom-up learning approaches such as neural networks are commonly used to obtain an optimal controller for a target task on a mechanical system. However, they require a huge number of trials, which takes considerable time and places stress on the hardware. To avoid these issues, a simulator is often built and the learning method is run on it; this raises the further questions of how the simulator should be constructed and how accurately it reproduces the real system. In this study, we consider constructing a simulator directly from data sampled on the actual robot. The constructed simulator is then used to learn the target task, and the resulting optimal controller is applied back to the actual robot. As a concrete example, we use a five-linked manipulator robot and make it track a ball. The simulator is constructed by neural networks trained with the back-propagation method, and the optimal controller is obtained by a reinforcement learning method. After the initial data sampling, both processes run without the actual robot; the load on the hardware is therefore much smaller, and the objective controller can be obtained faster than by learning on the real robot alone. We consider that the proposed method can serve as a basic and versatile learning strategy for obtaining optimal controllers of mechanical systems.
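The two phases described above (constructing a simulator by back-propagation, then running reinforcement learning entirely inside it) can be sketched as follows. Everything here is an illustrative assumption, not the paper's implementation: the "robot" is a toy one-dimensional linear plant rather than the five-linked manipulator, the task is regulating the state to zero rather than ball tracking, and tabular Q-learning stands in for whatever RL method the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "actual robot": a 1-D linear plant used only to generate logged
# data and to evaluate the final controller (a simplifying assumption; the
# paper's robot is a five-linked manipulator).
def true_plant(s, a):
    return 0.9 * s + 0.5 * a

# Phase 1: construct the simulator -- a one-hidden-layer neural network
# trained by plain back-propagation on sampled (state, action) -> next state.
W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

def sim_forward(x):
    h = np.tanh(x @ W1 + b1)                 # hidden activations
    return h @ W2 + b2, h

X = rng.uniform(-1.0, 1.0, size=(512, 2))    # logged (s, a) pairs
Y = true_plant(X[:, :1], X[:, 1:2])          # observed next states

lr = 0.05
for _ in range(2000):                        # batch gradient descent
    pred, h = sim_forward(X)
    err = pred - Y                           # dLoss/dpred for 0.5 * MSE
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)       # back-prop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

model_mse = float(np.mean((sim_forward(X)[0] - Y) ** 2))

# Phase 2: reinforcement learning (tabular Q-learning over a discretized
# state) run entirely inside the learned simulator -- no robot involved.
actions = np.linspace(-1.0, 1.0, 5)
n_bins = 21
edges = np.linspace(-1.0, 1.0, n_bins + 1)

def disc(s):
    return int(np.clip(np.digitize(s, edges) - 1, 0, n_bins - 1))

Q = np.zeros((n_bins, len(actions)))
gamma, alpha, eps = 0.95, 0.2, 0.2
for _ in range(300):                         # episodes in the simulator
    s = float(rng.uniform(-1.0, 1.0))
    for _ in range(30):
        i = disc(s)
        a_idx = (int(rng.integers(len(actions))) if rng.random() < eps
                 else int(Q[i].argmax()))
        s_next = float(sim_forward(np.array([[s, actions[a_idx]]]))[0])
        r = -abs(s_next)                     # reward: keep state near 0
        Q[i, a_idx] += alpha * (r + gamma * Q[disc(s_next)].max()
                                - Q[i, a_idx])
        s = float(np.clip(s_next, -1.0, 1.0))

# Finally, apply the controller learned in simulation to the "actual robot".
s_final = 0.8
for _ in range(20):
    a = actions[int(Q[disc(s_final)].argmax())]
    s_final = float(true_plant(s_final, a))
```

After training, `model_mse` measures how well the learned simulator fits the logged data, and `s_final` shows the simulator-trained controller regulating the true plant; only the initial data sampling touched the plant, mirroring the paper's claim that the hardware load stays small.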
