Abstract

State-of-the-art reinforcement learning (RL) techniques have driven remarkable advances in robot control, especially in combination with deep neural networks (DNNs), a pairing known as deep reinforcement learning (DRL). In this article, rather than reviewing theoretical studies on RL, whose foundations were largely established several decades ago, we summarize state-of-the-art techniques added to commonly used RL frameworks for robot control. We focus on bioinspired robots (BIRs) because they can learn to locomote or produce natural behaviors similar to those of animals and humans. With the ultimate goal of practical deployment in the real world, we further narrow our scope to techniques that aid sim-to-real transfer. We categorize these techniques into four groups: 1) use of accurate simulators; 2) use of kinematic and dynamic models; 3) use of hierarchical and distributed controllers; and 4) use of demonstrations. These four groups of techniques serve, respectively, to supply general and accurate environments for RL training, to improve sample efficiency, to divide and conquer complex motion tasks and redundant robot structures, and to acquire natural skills. We find that, by using these techniques in combination, it is possible to deploy RL on physical BIRs in practice.
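As a concrete illustration of the kind of sim-to-real technique surveyed here, the sketch below shows domain randomization, a widely used strategy in which a simulator's physical parameters are re-sampled at every episode so a learned policy does not overfit to one fixed simulation. This is a minimal sketch under stated assumptions, not a method from the article: the `RandomizedSimEnv` wrapper, the `base_env` interface (its `body_mass`, `nominal_mass`, and `friction` attributes), and the sampling ranges are all hypothetical placeholders chosen for illustration.

```python
import numpy as np


class RandomizedSimEnv:
    """Illustrative domain-randomization wrapper for a simulator.

    The wrapped `base_env` and its attribute names are hypothetical
    assumptions; any real simulator would expose its own parameter API.
    """

    def __init__(self, base_env, rng=None):
        self.env = base_env
        self.rng = rng if rng is not None else np.random.default_rng()

    def reset(self):
        # Re-sample dynamics each episode so the policy must remain
        # robust across a distribution of simulated worlds rather than
        # one exact model. Ranges here are placeholder assumptions.
        self.env.body_mass = self.rng.uniform(0.8, 1.2) * self.env.nominal_mass
        self.env.friction = self.rng.uniform(0.5, 1.5)
        return self.env.reset()

    def step(self, action):
        # Dynamics randomization happens only at reset; stepping is
        # delegated unchanged to the underlying simulator.
        return self.env.step(action)
```

Randomizing at reset rather than mid-episode keeps each rollout internally consistent while still exposing the policy, over many episodes, to the range of dynamics it may encounter on the physical robot.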
