Humans adapt their locomotion seamlessly in response to changes in the body or the environment. It is unclear how such adaptation improves performance measures like energy consumption or symmetry while avoiding falls. Here, we model locomotor adaptation as interactions between a stabilizing controller that reacts quickly to perturbations and a reinforcement learner that gradually improves the controller’s performance through local exploration and memory. This model predicts time-varying adaptation in many settings: walking on a split-belt treadmill (i.e., with each foot moving at a different speed), with asymmetric leg weights, or using exoskeletons — capturing learning and generalization phenomena in ten prior experiments and two model-guided experiments conducted here. The performance measure of energy minimization with a minor cost for asymmetry captures a broad range of phenomena and can act alongside other mechanisms such as reducing sensory prediction error. Such a model-based understanding of adaptation can guide rehabilitation and wearable robot control.
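As a rough illustration only (not the authors' implementation), the described learning mechanism — local exploration around a remembered policy, retaining changes that lower a cost combining energy with a minor asymmetry penalty — can be sketched as follows. All parameter names and the cost function here are hypothetical placeholders:

```python
import random

def cost(params, w_asym=0.1):
    # Hypothetical performance measure: an energy-like term plus a
    # minor cost for gait asymmetry, as described in the abstract.
    energy = (params["step"] - 1.0) ** 2   # energy minimized at step = 1.0 (assumed)
    asymmetry = abs(params["asym"])        # symmetric gait corresponds to asym = 0
    return energy + w_asym * asymmetry

def adapt(params, steps=2000, sigma=0.05, seed=0):
    # Local exploration with memory: perturb the remembered parameters
    # slightly and keep the perturbation only if it improves the cost.
    rng = random.Random(seed)
    best = dict(params)
    best_cost = cost(best)
    for _ in range(steps):
        trial = {k: v + rng.gauss(0.0, sigma) for k, v in best.items()}
        c = cost(trial)
        if c < best_cost:  # remember the better policy
            best, best_cost = trial, c
    return best

# Starting from a perturbed, asymmetric gait, the learner gradually
# drifts toward the low-energy, near-symmetric optimum.
final = adapt({"step": 0.5, "asym": 0.4})
```

This is a minimal hill-climbing sketch; the paper's model additionally couples such learning with a fast stabilizing controller, which is not represented here.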