Many safety-critical or performance-demanding systems are human-in-the-loop, i.e., the robot interacts with both humans and the environment, making human-in-the-loop control a key research topic. In this paper, a unified optimal interaction controller in joint space is presented for multi-point human-robot-environment interaction (HREI) problems, which arise frequently in human-robot collaborative manipulation tasks. Specifically, a model-based reinforcement learning method is leveraged to obtain optimal interaction control. For multi-point HREI, the interaction forces exerted on each link are isolated and estimated via the backward generalized momentum observer method. In HREI problems, the dynamics parameters of the environment and of the human arm are usually unknown, stochastic, and time-varying. To obviate the dependence on these parameters, the Gaussian mixture model/Gaussian mixture regression (GMM/GMR) method is employed to learn the unknown external dynamics. Notably, the interaction forces are treated as system states, whose time derivatives are computed from the GMM/GMR learning results via the chain rule. The iterative linear quadratic Gaussian with learned external dynamics (ILQG-LED) method is then applied to realize optimal multi-point HREI control. The validity of the proposed method is verified through experimental studies.
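To give a feel for the force-estimation step, the following is a minimal 1-DoF sketch of the classical generalized momentum observer on which the approach builds; all numerical values (mass, gain, external force) are illustrative assumptions, and the paper's backward, multi-point variant for isolating per-link forces is not reproduced here.

```python
import numpy as np

# Plant: a unit mass driven by motor torque tau and an unknown external
# force F_ext; generalized momentum p = m*v, so p_dot = tau + F_ext.
# Observer residual r = K*(p - integral(tau + r) dt) converges to F_ext
# with first-order dynamics r_dot = K*(F_ext - r).

m, dt, K = 1.0, 1e-3, 50.0       # mass, step size, observer gain (assumed)
tau, F_ext = 0.0, 1.0            # motor torque, unknown external force
v, integ, r = 0.0, 0.0, 0.0

for _ in range(1000):            # simulate 1 s
    v += dt * (tau + F_ext) / m  # true plant (F_ext unknown to the observer)
    integ += dt * (tau + r)      # integral of known torque plus residual
    r = K * (m * v - integ)      # residual estimate of the external force

print(r)  # converges to F_ext = 1.0
```

The residual `r` needs only proprioceptive quantities (momentum and commanded torque), which is what makes observer-based estimation attractive when no force/torque sensor is mounted at the contact point.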
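The GMM/GMR step can likewise be sketched in a few lines. Below, the mixture parameters are hand-set to stand in for EM-fitted ones (a hypothetical two-component mixture encoding the map y = 2x); GMR then returns the conditional mean E[y | x] as a responsibility-weighted sum of per-component linear regressors.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    # Multivariate normal density N(x; mu, sigma)
    d = x - mu
    norm = np.sqrt((2 * np.pi) ** len(mu) * np.linalg.det(sigma))
    return np.exp(-0.5 * d @ np.linalg.solve(sigma, d)) / norm

def gmr(x, priors, mu_x, mu_y, sig_xx, sig_yx):
    # Responsibilities h_k(x) of each mixture component for the query x
    h = np.array([p * gaussian_pdf(x, mx, sxx)
                  for p, mx, sxx in zip(priors, mu_x, sig_xx)])
    h /= h.sum()
    # E[y|x] = sum_k h_k * (mu_y_k + Sig_yx_k Sig_xx_k^{-1} (x - mu_x_k))
    return sum(hk * (my + syx @ np.linalg.solve(sxx, x - mx))
               for hk, mx, my, sxx, syx
               in zip(h, mu_x, mu_y, sig_xx, sig_yx))

# Illustrative two-component mixture fitted (by assumption) to y = 2x
priors = [0.5, 0.5]
mu_x = [np.array([-1.0]), np.array([1.0])]
mu_y = [np.array([-2.0]), np.array([2.0])]
sig_xx = [np.array([[0.5]]), np.array([[0.5]])]
sig_yx = [np.array([[1.0]]), np.array([[1.0]])]

y = gmr(np.array([0.5]), priors, mu_x, mu_y, sig_xx, sig_yx)
print(y)  # [1.0]: the regressed output for x = 0.5
```

Because E[y | x] is a smooth function of x, its Jacobian is available in closed form, which is what allows the time derivatives of the learned interaction forces to be propagated through the chain rule as the abstract describes.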