Abstract

Intelligent physical skills are fundamental for robots to interact with the real world. Instead of learning from individual sources in isolated cases, continuous robot learning from crowdsourced mentors over the long term provides a practical path towards ubiquitous robot physical intelligence. These mentors can be human operators who teleoperate robots whenever the robots are not yet intelligent enough to act autonomously. A large amount of sensorimotor data can be collected continuously from a group of teleoperators and processed by machine learning to generate and improve the robots' autonomous physical skills. This paper presents a learning method that utilizes state space discretization to sustainably manage constantly collected data and to synthesize autonomous robot skills. Two types of state space discretization are proposed, and their advantages and limitations are examined and compared. Simulation and physical tests on two object manipulation challenges are conducted to evaluate the proposed learning method. The method's capability to handle system uncertainty, to sustainably manage high-dimensional state spaces, and to synthesize skills that are new or only partly demonstrated is validated. The work is expected to provide a long-term, large-scale means of producing advanced robot physical intelligence.
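
To make the core idea concrete, the sketch below shows one simple form of state space discretization: a uniform grid that maps continuous sensorimotor states to discrete cells and accumulates teleoperated demonstrations per cell, from which an autonomous action can later be looked up. The grid resolution, state bounds, and mean-action policy here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from collections import defaultdict

class GridDiscretizer:
    """Illustrative uniform-grid discretization of a continuous state space.
    Assumed structure for demonstration only; the paper proposes two
    discretization types whose details are not reproduced here."""

    def __init__(self, lower, upper, cells_per_dim):
        self.lower = np.asarray(lower, dtype=float)
        self.upper = np.asarray(upper, dtype=float)
        self.cells = np.asarray(cells_per_dim, dtype=int)
        # Demonstrated actions stored per discrete cell.
        self.demos = defaultdict(list)

    def cell_of(self, state):
        """Map a continuous state vector to a discrete cell index (tuple)."""
        ratio = (np.asarray(state, dtype=float) - self.lower) / (self.upper - self.lower)
        idx = np.clip((ratio * self.cells).astype(int), 0, self.cells - 1)
        return tuple(idx)

    def add_demo(self, state, action):
        """Store a teleoperated state-action pair under its discrete cell."""
        self.demos[self.cell_of(state)].append(np.asarray(action, dtype=float))

    def policy(self, state):
        """Return the mean demonstrated action for the cell, or None if unseen."""
        actions = self.demos.get(self.cell_of(state))
        return None if not actions else np.mean(actions, axis=0)

# Example: a 2-D state (e.g., gripper position) and a 1-D action (e.g., grip force).
disc = GridDiscretizer(lower=[0.0, 0.0], upper=[1.0, 1.0], cells_per_dim=[10, 10])
disc.add_demo([0.23, 0.71], [0.5])
print(disc.policy([0.25, 0.69]))  # reuses the demonstration falling in the same cell
```

Because every new teleoperated sample is folded into a fixed set of cells, memory grows with the number of visited cells rather than with the raw data stream, which is one way such discretization can keep long-term data collection manageable.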
