Abstract

It is well known that the human postural control system responds to motion of the visual scene, but the implicit assumptions it makes about the visual environment, and what quantities, if any, it estimates about that environment, are unknown. This study compares the behavior of four models of the human postural control system to experimental data. Three include internal models that estimate the state of the visual environment, implicitly assuming its dynamics to be that of a linear stochastic process (respectively, a random walk, a general first-order process, and a general second-order process). In each case, all of the coefficients that describe the process are estimated by an adaptive scheme based on maximum likelihood. The fourth model does not estimate the state of the visual environment; it adjusts sensory weights to minimize the mean square of the control signal without making any specific assumptions about the dynamic properties of the environmental motion. We find that both the presence of an internal model of the visual environment and its type make a significant difference in how the postural system responds to motion of the visual scene. Notably, the second-order process model outperforms the human postural system in its response to sinusoidal stimulation: it can correctly identify the frequency of the stimulus and compensate completely, so that the motion of the visual scene has no effect on sway. In this case the postural control system extracts the same information from the visual modality as it does when the visual scene is stationary. The fourth model, which does not simulate the motion of the visual environment, is the only one that reproduces the experimentally observed result that, across different frequencies of sinusoidal stimulation, the gain with respect to the stimulus drops as the amplitude of the stimulus increases while the phase remains roughly constant. Our results suggest that the human postural control system does not estimate the state of the visual environment in order to respond to sinusoidal stimuli.
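As an illustrative sketch only (the abstract does not give the exact parameterization, so the notation here is assumed), the three hypothesized environment dynamics can be written in discrete time, with $v_t$ the position of the visual scene and $w_t$ zero-mean white noise:

$$\text{random walk:}\qquad v_{t+1} = v_t + w_t$$
$$\text{first-order process:}\qquad v_{t+1} = a\,v_t + w_t$$
$$\text{second-order process:}\qquad v_{t+1} = a_1 v_t + a_2 v_{t-1} + w_t$$

In each case the coefficients (here $a$, $a_1$, $a_2$, and the variance of $w_t$) would be tuned online by the maximum-likelihood adaptive scheme, whereas the fourth model instead adjusts its sensory weights to minimize $E[u_t^2]$, the mean square of the control signal $u_t$, without assuming any such process. A second-order process of this form can have a pair of complex poles, i.e. it can represent an oscillation, which is why such an internal model can in principle identify the frequency of a sinusoidal stimulus and cancel its effect on sway.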
