Abstract

Older adults demonstrate impairments in navigation that cannot be explained by general cognitive and motor declines. Previous work has shown that older adults may combine sensory cues during navigation differently from younger adults, though this work has largely been conducted in dark environments, where sensory integration may differ from full-cue environments. Here, we test whether aging adults optimally combine cues from two sensory systems critical for navigation: vision (landmarks) and body-based self-motion cues. Participants completed a homing (triangle completion) task in immersive virtual reality, which allowed them to navigate in a well-lit environment with a visible ground plane. An optimal model, based on principles of maximum-likelihood estimation (MLE), predicts that homing precision should increase with multisensory information in proportion to each individual cue's perceived reliability (measured by its variability). We found that well-aging adults (with normal or corrected-to-normal sensory acuity and active lifestyles) were more variable and less accurate than younger adults during navigation. Both older and younger adults relied more on their visual systems than a maximum-likelihood estimation model would suggest; overall, however, younger adults' visual weighting matched the model's predictions, whereas older adults' weighting was sub-optimal. In addition, large inter-individual differences were observed in both age groups. These results suggest that older adults do not optimally weight visual and self-motion cues when combining them during navigation, and that older adults may benefit from interventions that help them recalibrate the combination of visual and self-motion cues for navigation.
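
For reference, the optimal model invoked above is presumably the standard two-cue maximum-likelihood estimator; the abstract does not state the equations, so the following is a sketch under that assumption. Given single-cue response variances $\sigma^2_{\mathrm{vis}}$ (landmarks) and $\sigma^2_{\mathrm{self}}$ (self-motion), the optimal combined estimate weights each cue by its relative reliability:

\[
\hat{x} = w_{\mathrm{vis}}\,\hat{x}_{\mathrm{vis}} + w_{\mathrm{self}}\,\hat{x}_{\mathrm{self}},
\qquad
w_{\mathrm{vis}} = \frac{1/\sigma^2_{\mathrm{vis}}}{1/\sigma^2_{\mathrm{vis}} + 1/\sigma^2_{\mathrm{self}}},
\qquad
w_{\mathrm{self}} = 1 - w_{\mathrm{vis}},
\]

yielding a combined variance

\[
\sigma^2_{\mathrm{comb}} = \frac{\sigma^2_{\mathrm{vis}}\,\sigma^2_{\mathrm{self}}}{\sigma^2_{\mathrm{vis}} + \sigma^2_{\mathrm{self}}}
\le \min\!\left(\sigma^2_{\mathrm{vis}},\, \sigma^2_{\mathrm{self}}\right).
\]

This is the sense in which precision should increase with multisensory information: the combined estimate is never more variable than the better single cue. "Relying more on vision than the model suggests" then corresponds to an empirical $w_{\mathrm{vis}}$ exceeding this predicted weight.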
