Zero-shot imitation learning has demonstrated superior performance in learning complex robotic tasks with less human participation. Recent studies show convincing performance under the condition that the robot strictly follows the demonstration using a learned inverse model. However, these methods struggle to achieve satisfactory imitation performance when the demonstration is suboptimal, and the training of the inverse models is vulnerable to label ambiguity. In this paper, we propose Self-Optimal Zero-shot Imitation Learning (SOZIL) to tackle these problems. The contribution of SOZIL is twofold. First, Goal Consistency Loss (GCL) is designed to learn a multi-step goal-conditioned policy from exploration data. By directly using the goal state as supervision, GCL resolves the label ambiguity caused by trajectory and action diversity. Second, Estimation-based Keyframe Extraction (EKE) is developed to optimize demonstrations. We formulate keyframe extraction as a path optimization problem under suboptimal control. By predicting how well the learned policy can execute the transition between any two states, EKE builds a directed graph containing all candidate paths and extracts keyframes by solving the graph's shortest-path problem. Furthermore, the proposed method is evaluated in various simulated and real-world robotic manipulation experiments, including cable harness assembly, rope manipulation, and block moving. Experimental results show that SOZIL achieves a higher success rate and greater manipulation efficiency than the baselines.
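The abstract only names GCL; as a rough illustration, one plausible form of a goal-consistency objective (our assumption, not the paper's exact definition) penalizes the mismatch between the goal state $g$ and the state reached by rolling the policy's action through a forward model $f$:

$$\mathcal{L}_{\mathrm{GCL}} = \mathbb{E}_{(s_t,\, g)}\left[\, \big\| f\big(s_t, \pi_\theta(s_t, g)\big) - g \big\|_2^2 \,\right]$$

Likewise, the shortest-path view of EKE admits a simple sketch. The snippet below is a minimal illustration, not the authors' implementation: `predict_success` is a hypothetical stand-in for the learned performance estimator, each forward pair of demonstration states becomes a directed edge weighted by $-\log P(\text{success})$, and Dijkstra's algorithm then recovers the keyframe sequence whose product of predicted per-transition success probabilities is highest.

```python
import heapq
import math
from typing import Callable, List, Sequence

def extract_keyframes(
    states: Sequence,                                     # demonstration states s_0..s_T
    predict_success: Callable[[object, object], float],   # assumed estimator of P(success)
    eps: float = 1e-6,                                    # floor to avoid log(0)
) -> List[int]:
    """Return keyframe indices forming the most reliable path s_0 -> s_T."""
    n = len(states)
    # Directed edges run only from earlier to later demonstration states,
    # so the graph is a DAG over time-ordered candidate keyframes.
    cost = {}
    for i in range(n):
        for j in range(i + 1, n):
            p = max(predict_success(states[i], states[j]), eps)
            cost[(i, j)] = -math.log(p)  # shortest path = max product of probabilities

    # Dijkstra from the first state (node 0) to the last (node n - 1).
    dist = [float("inf")] * n
    prev = [-1] * n
    dist[0] = 0.0
    pq = [(0.0, 0)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v in range(u + 1, n):
            nd = d + cost[(u, v)]
            if nd < dist[v]:
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))

    # Walk the predecessor chain back from the goal to recover keyframes.
    path, v = [], n - 1
    while v != -1:
        path.append(v)
        v = prev[v]
    return path[::-1]
```

For example, `extract_keyframes(traj, est)` with a trajectory `traj` and an estimator `est` would return something like `[0, 4, 9]`, i.e. a subsequence of the suboptimal demonstration that the learned policy is predicted to traverse most reliably.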