Abstract
To identify the underlying mechanisms of human motor control, parametric models are utilized. One approach to employing these models is to infer the control intent, i.e., to estimate the motor control strategy. A well-accepted assumption is that human motor control is optimal; thus, the intent is inferred by solving an inverse optimal control (IOC) problem. The linear quadratic regulator (LQR) is a well-established optimal controller, and its inverse problem (inverse LQR, ILQR) has been used in the literature to infer the control intent of a single subject. That implementation used a cost function with a gain penalty, minimizing the error between the LQR gain and a preliminarily estimated gain. We hypothesize that relying on an estimated gain may limit the optimization capability of ILQR. In this study, we derive an ILQR optimization with an output penalty, minimizing the error between the model output and the measured output. We tested the method on 30 healthy subjects seated on a robotic seat capable of rotation. The task involved physical human–robot interaction, with a perturbation torque as input and the lower- and upper-body angles as outputs. Our method significantly improved the goodness of fit compared to the gain-penalty ILQR. Moreover, the dominant inferred intent was not statistically different between the two methods. To our knowledge, this work is the first to infer motor control intent for a sample of healthy subjects. This is a step toward investigating differences in control intent between healthy subjects and subjects with altered motor control, e.g., low back pain.
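For concreteness, the two ILQR objectives contrasted above can be sketched as follows; the notation here is ours (the symbols $K$, $\hat{K}$, $Q$, $R$, $y_k$, and $\tilde{y}_k$ do not appear in the abstract), and the exact formulation, constraints, and weighting are those given in the full paper. The gain-penalty ILQR searches for LQR cost weights $(Q, R)$ whose resulting feedback gain $K(Q,R)$ matches a preliminarily estimated gain $\hat{K}$,
\[
\min_{Q \succeq 0,\; R \succ 0} \; \lVert K(Q,R) - \hat{K} \rVert_F^2 ,
\]
whereas the proposed output-penalty ILQR fits the closed-loop model response directly to the data,
\[
\min_{Q \succeq 0,\; R \succ 0} \; \sum_{k} \lVert y_k(Q,R) - \tilde{y}_k \rVert^2 ,
\]
where $y_k(Q,R)$ denotes the model output (lower- and upper-body angles) under the LQR controller driven by the measured perturbation torque, and $\tilde{y}_k$ denotes the corresponding measured output.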