Abstract

Robotic manipulation often requires adaptation to changing environments. Such changes can be represented by a number of contextual variables that may be observed or sensed in different ways. When learning and representing robot motion, usually with movement primitives, it is desirable to adapt the learned behaviors to the current context. Moreover, different actions or motions can be considered within the same framework, using contextualization to decide which action applies to which situation. Such frameworks, however, easily become high-dimensional, requiring a reduction of both the dimensionality of the parameter space and the amount of data needed to build and improve the model with experience. In this letter, we propose an approach to obtain a generative model from a set of actions that share a common feature. This feature, namely a contextual variable, is plugged into the model to generate motion. We encode the data with a Gaussian mixture model (GMM) in the parameter space of probabilistic movement primitives, after performing dimensionality reduction on that parameter space. We append the contextual variable to the parameter space and determine the number of Gaussian components, i.e., the number of distinct actions in the dataset, through persistent homology. Then, using multimodal Gaussian mixture regression, we retrieve the most likely actions given a contextual situation and execute them. After each execution, we apply a reward-weighted responsibility GMM update to improve the model. Experiments in three scenarios show that the method drastically reduces the dimensionality of the parameter space, implementing both action selection and adaptation to a changing situation in an efficient way.
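
The core retrieval step described above is conditioning a joint GMM over (context, reduced ProMP weights) on the observed context. Below is a minimal sketch of that multimodal Gaussian mixture regression step, assuming a context of dimension d_s appended as the leading dimensions of each component's mean and covariance; all names (pi, mu, sigma, condition_gmm_on_context) are illustrative and not taken from the paper.

```python
import numpy as np

def condition_gmm_on_context(pi, mu, sigma, s, d_s=1):
    """Multimodal GMR: condition the joint density p(s, w) on the observed
    context s, returning per-component responsibilities h_k(s) and the
    conditional Gaussians p(w | s, k) = N(mu_k^{w|s}, Sigma_k^{w|s})."""
    K = len(pi)
    resp = np.zeros(K)
    cond_means, cond_covs = [], []
    for k in range(K):
        # Partition component k into context (s) and weight (w) blocks.
        mu_s, mu_w = mu[k][:d_s], mu[k][d_s:]
        S_ss = sigma[k][:d_s, :d_s]
        S_ws = sigma[k][d_s:, :d_s]
        S_ww = sigma[k][d_s:, d_s:]
        diff = s - mu_s
        # Responsibility of component k for this context:
        # h_k(s) proportional to pi_k * N(s | mu_s, S_ss).
        resp[k] = pi[k] * np.exp(-0.5 * diff @ np.linalg.solve(S_ss, diff)) \
                  / np.sqrt((2 * np.pi) ** d_s * np.linalg.det(S_ss))
        # Standard Gaussian conditioning within component k.
        gain = S_ws @ np.linalg.inv(S_ss)
        cond_means.append(mu_w + gain @ diff)
        cond_covs.append(S_ww - gain @ S_ws.T)
    resp /= resp.sum()
    return resp, cond_means, cond_covs
```

Action selection then amounts to picking the component with the highest responsibility for the current context and executing the ProMP reconstructed from its conditional mean, e.g.:

```python
# resp, means, covs = condition_gmm_on_context(pi, mu, sigma, np.array([0.3]))
# k_star = np.argmax(resp)   # most likely action for this context
# w_hat = means[k_star]      # adapted (reduced) ProMP weight vector
```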
