Abstract

Collaborative robots are set to play an important role in the future of the manufacturing industry. They need to be able to work outside of safety fencing and to perform new tasks to individual customer specifications. The need for frequent robot re-programming is a major challenge for small and medium-sized companies alike. Learning from demonstration is a promising approach that aims to enable robots to acquire new task knowledge from their end users, consisting of a sequence of actions, the associated skills, and the context in which the task is executed. Current systems offer limited support for integrating semantics and for handling environmental changes. This paper introduces a system that combines several modalities as demonstration interfaces, including natural language instruction, visual observation, and hand-guiding, enabling the robot to learn a task comprising a goal concept, a plan, and basic actions while taking the current state of the environment into account. The task thus learned can then be generalized to similar tasks involving different initial and goal states.
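The abstract does not specify the paper's internal representations, but as a minimal sketch, a learned task of this kind can be modelled as a goal concept (a set of predicates), a demonstrated plan, and basic actions with preconditions and effects. The Python below is an illustrative assumption, not the authors' implementation; the names Action, Task, and generalize are hypothetical. Generalization to a new initial state is approximated here by replaying the demonstrated plan and skipping actions whose effects already hold.

from dataclasses import dataclass, field

# Hypothetical sketch: all structures below are assumptions for
# illustration, not the representations used in the paper.

@dataclass(frozen=True)
class Action:
    """A basic action/skill acquired from demonstration."""
    name: str
    preconditions: frozenset  # predicates that must hold before execution
    effects: frozenset        # predicates made true by execution

@dataclass
class Task:
    """A learned task: a goal concept plus the demonstrated plan."""
    goal: frozenset                            # goal concept as predicates
    plan: list = field(default_factory=list)   # ordered Actions

def generalize(task: Task, initial_state: frozenset) -> list:
    """Replay the demonstrated plan from a new initial state,
    skipping actions whose effects already hold (a crude stand-in
    for generalizing to different initial and goal states)."""
    state, new_plan = set(initial_state), []
    for action in task.plan:
        if action.effects <= state:
            continue  # effects already satisfied, skip this action
        if not action.preconditions <= state:
            raise ValueError(f"cannot execute {action.name} in this state")
        new_plan.append(action)
        state |= action.effects
    if not task.goal <= state:
        raise ValueError("plan does not reach the goal from this state")
    return new_plan

# Example: a two-step pick-and-place demonstration
pick = Action("pick(cube)", frozenset({"at(cube,table)"}),
              frozenset({"holding(cube)"}))
place = Action("place(cube,box)", frozenset({"holding(cube)"}),
               frozenset({"in(cube,box)"}))
task = Task(goal=frozenset({"in(cube,box)"}), plan=[pick, place])

# New initial state in which the robot already holds the cube:
print([a.name for a in generalize(task, frozenset({"holding(cube)"}))])
# -> ['place(cube,box)']

Running the sketch with an initial state in which the robot already holds the cube yields a one-step plan, illustrating how the same learned task can adapt to a different starting state.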
