Runners naturally adopt a stride frequency close to the stride frequency that minimizes energy consumption. Although this capacity for self-optimization is well recognized, we lack mechanistic insight into the association between stride frequency and energy consumption. Altering stride frequency affects lower extremity joint power; however, these alterations differ between joints, possibly with counteracting effects on energy consumption during ground contact and leg swing. Here, we investigated the effects of changing stride frequency from a joint-level perspective. Seventeen experienced runners performed six running trials at 12 km·h⁻¹ at five different stride frequencies (preferred stride frequency (PSF) twice, PSF ± 8%, and PSF ± 15%). During each trial, we measured metabolic energy consumption and muscle activation and collected kinematic and kinetic data, from which we calculated average positive joint power using inverse dynamics. With decreasing stride frequency, average positive ankle and knee power during ground contact increased (P < 0.01), whereas average positive hip power during leg swing decreased (P < 0.01). Similarly, average soleus muscle activation during ground contact decreased with increasing stride frequency (P < 0.01). In addition, the relative contribution of positive ankle power to total positive joint power during ground contact decreased (P = 0.01) with decreasing stride frequency, whereas the relative contribution of the hip over the full stride increased (P < 0.01) with increasing stride frequency. Our results support the hypothesis that the optimal stride frequency reflects a trade-off: minimizing the energy consumed during ground contact, which favors higher stride frequencies, without excessively increasing the cost of leg swing or unduly reducing the time available to produce the necessary forces.
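To make the joint-power measure concrete, the minimal Python sketch below shows one common way to compute average positive joint power for a given stride phase from inverse-dynamics output: instantaneous joint power is the net joint moment multiplied by the joint angular velocity, only the positive (concentric) portion is retained, and the result is time-averaged over the phase. The function name, variable names, sampling rate, and synthetic signals are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

def average_positive_joint_power(joint_moment, joint_angular_velocity, phase_mask):
    """Illustrative sketch, not the authors' implementation.

    joint_moment           : net joint moment time series [N·m]
    joint_angular_velocity : joint angular velocity time series [rad/s]
    phase_mask             : boolean array selecting the phase of interest
                             (e.g., ground contact or leg swing)
    Returns the average positive joint power over the selected phase [W].
    """
    power = joint_moment * joint_angular_velocity   # instantaneous joint power
    positive_power = np.clip(power, 0.0, None)      # keep only positive (concentric) power
    return positive_power[phase_mask].mean()        # time-average over the phase

# Hypothetical usage with synthetic data sampled at 1000 Hz:
fs = 1000
t = np.arange(0.0, 0.7, 1.0 / fs)                   # one stride of ~0.7 s
moment = 100 * np.sin(2 * np.pi * t / 0.7)          # synthetic ankle moment [N·m]
omega = 5 * np.sin(2 * np.pi * t / 0.7 + 0.3)       # synthetic angular velocity [rad/s]
contact = t < 0.25                                  # assume the first 0.25 s is ground contact
print(average_positive_joint_power(moment, omega, contact))
```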