Abstract

Background/Study Context: Attention can be reflectively oriented to a visual or auditory representation in short-term memory, but it is not clear how aging and hearing acuity affect reflective attention. The purpose of the present study was to examine whether performance in auditory and visual reflective attention tasks varies as a function of participants' age and hearing status.

Methods: Young adults (19 to 33 years) and older adults with normal or mild-to-moderate hearing loss (62 to 90 years) completed a delayed match-to-sample task in which participants were first presented with a memory array of four different digits to hold in memory. Two digits were presented visually (left and right hemifield), and two were presented aurally (left and right ears simultaneously). During the retention interval, participants were presented with a cue (dubbed a retro-cue), which was either uninformative or indicated that participants should retrospectively orient their attention to either auditory short-term memory (ASTM) or visual short-term memory (VSTM). The cue was followed by another delay, after which a single item (i.e., the test probe) was presented for comparison (match or no match) with the items held in ASTM and/or VSTM.

Results: Overall, informative retro-cues yielded faster response times than uninformative retro-cues. The retro-cue benefit in response time was comparable for auditory- and visual-orienting retro-cues and similar in young and older adults. Regression analyses showed that only the auditory-orienting retro-cue benefit was predicted by hearing status rather than age per se.

Conclusion: Both younger and older adults can benefit from visual- and auditory-orienting retro-cues, but the auditory-orienting retro-cue benefit decreases with poorer hearing acuity. This finding highlights changes in cognitive processes that come with age even in those with only mild-to-moderate hearing loss, and suggests that older adults' performance in working memory tasks is sensitive to low-level auditory scene analysis (i.e., concurrent sound segregation).
