Abstract

This letter combines an imitation learning approach with a model-based, constraint-based task specification and control methodology. Imitation learning gives the end user an intuitive way to specify the context of a new robot application without requiring traditional programming skills, while constraint-based robot programming allows complex tasks involving different kinds of sensor input to be defined. Combining both enables complex tasks to be adapted to new environments and new objects with a small number of demonstrations. The proposed method uses a statistical uni-modal model that describes the demonstrations in terms of a number of weighted basis functions, and combines it with model-based descriptions of the other aspects of the task at hand. The approach was tested in a use case inspired by an industrial application, in which the required transfer motions were learned from a small number of demonstrations and gradually improved by adding new demonstrations; information on a collision-free path was likewise introduced through a small number of demonstrations. The method showed a high level of composability with force- and vision-controlled tasks. The use case showed that the deployment of a complex constraint-based task with sensor interactions can be expedited using imitation learning.
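To illustrate the idea of describing a demonstration with weighted basis functions, the following is a minimal sketch (not the authors' implementation): a demonstrated 1-D trajectory on a normalized time axis is encoded as a weighted sum of Gaussian radial basis functions, with the weights fitted by least squares. The trajectory, basis count, and basis width are illustrative assumptions.

```python
import numpy as np

def rbf_features(t, n_basis=10, width=0.02):
    # Evenly spaced Gaussian basis functions on [0, 1] (an illustrative choice).
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2.0 * width))
    return phi / phi.sum(axis=1, keepdims=True)  # normalize so rows sum to 1

# Hypothetical demonstrated trajectory, sampled on a normalized time axis.
t = np.linspace(0.0, 1.0, 100)
demo = np.sin(2.0 * np.pi * t)

Phi = rbf_features(t)
# Least-squares fit of the basis-function weights to the demonstration.
w, *_ = np.linalg.lstsq(Phi, demo, rcond=None)

reconstruction = Phi @ w
error = np.max(np.abs(reconstruction - demo))
```

With several demonstrations, each yields its own weight vector; a uni-modal statistical model then amounts to estimating, e.g., a single mean and covariance over those weight vectors, which can be refined as new demonstrations are added.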
