Abstract

Intelligent blending of human and automatic control inputs to collaboratively achieve shared-control robotic tasks has received considerable attention in recent years. The benefits of such blending are many, including achieving better performance while maintaining robust situation awareness. Effective modeling of a given task using data obtained from a human operator's demonstrations is an important building block for shared control, because the model is used as a reference for predicting the operator's intent and generating automatic control inputs to facilitate task execution. Subgoal-based modeling, in which a complicated task is encoded as a finite number of subgoals, has yielded good results for practical tasks in shared-control applications. In this paper, we present a new method for learning the subgoals of a task from a human operator's demonstration. The modeling process involves: (1) extracting distributions of potential subgoals by effectively quantifying the human operator's commands via a unified metric and (2) learning subgoals and their execution sequence from the extracted distributions via a Bayesian non-parametric clustering method with temporal ordering. We apply the proposed method to two demonstrations and present the learned subgoals: a construction earth-moving task with a hydraulic excavator and an acrobatic flight task with a quadrotor.
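
To give a concrete sense of step (2), the sketch below shows one way Bayesian non-parametric clustering with temporal ordering could look in practice: a Dirichlet-process Gaussian mixture is fit to candidate-subgoal samples, and the discovered clusters are ordered by the mean time at which their samples occur. The synthetic data, the use of scikit-learn's `BayesianGaussianMixture`, and the time-ordering heuristic are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Hypothetical candidate-subgoal samples extracted from a demonstration:
# each row is a candidate subgoal state (e.g., an end-effector position),
# and `times` records when each sample was observed.
rng = np.random.default_rng(0)
times = np.arange(300)
candidates = np.vstack([
    rng.normal(loc=[0.0, 0.0, 0.5], scale=0.05, size=(100, 3)),  # e.g., dig region
    rng.normal(loc=[1.2, 0.8, 1.0], scale=0.05, size=(100, 3)),  # e.g., swing region
    rng.normal(loc=[2.0, 0.0, 0.3], scale=0.05, size=(100, 3)),  # e.g., dump region
])

# Dirichlet-process mixture: n_components is only a truncation level;
# the effective number of subgoals is inferred from the data.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    random_state=0,
)
labels = dpgmm.fit_predict(candidates)

# Impose a temporal order by ranking each discovered cluster by the mean
# time index of the samples assigned to it (a simple ordering heuristic).
active = np.unique(labels)
order = sorted(active, key=lambda k: times[labels == k].mean())
print(f"inferred {len(order)} subgoals, in execution order:")
for k in order:
    print(f"  cluster {k}: mean state {np.round(dpgmm.means_[k], 3)}")
```

In this toy run the mixture collapses onto three effective clusters and reports them in the order dig, swing, dump, mirroring the idea of recovering both the subgoals and their execution sequence from a single demonstration.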
