The current study aimed to identify how partial auditory (speech) and visual (text) information is resolved for sentence recognition by younger and older adults. Interrupted speech was created by periodically replacing speech with silent intervals at various interruption rates. Likewise, interrupted text was created by periodically replacing printed text with white space. Speech was further processed by low-pass filtering the stimulus spectrum at 2000 Hz. Speech and text stimuli were interrupted at 0.5, 2, 8, and 32 Hz and presented in unimodal and multimodal conditions. Overall, older listeners performed similarly to younger listeners in the unimodal conditions. However, older listeners showed a marked decrease in performance during multimodal testing, suggesting increased difficulty integrating information across modalities. Performance for both groups improved with higher interruption rates for both text and speech in unimodal and multimodal conditions. Listeners also benefited from most multimodal presentations, even when the interruption rate was mismatched between modalities. Supplementing speech with incomplete visual cues can therefore improve sentence intelligibility and compensate for degraded speech in adverse listening conditions. However, the magnitude of multimodal benefit appears to be reduced by age-related declines in integrating information across modalities. [Work supported, in part, by NIH/NIDCD.]
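The speech-processing steps described above (periodic replacement with silence at a fixed interruption rate, plus low-pass filtering at 2000 Hz) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 50% duty cycle, the filter order, and the function names are assumptions, since the abstract specifies only the interruption rates and the 2000 Hz cutoff.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def interrupt(signal, fs, rate_hz, duty=0.5):
    """Periodically replace segments of `signal` with silence at `rate_hz`.
    The 50% on/off duty cycle is an assumption; the abstract does not state it."""
    t = np.arange(len(signal)) / fs
    gate = (np.mod(t * rate_hz, 1.0) < duty).astype(float)
    return signal * gate

def lowpass(signal, fs, cutoff=2000.0, order=4):
    """Low-pass filter at 2000 Hz, as in the abstract.
    The 4th-order Butterworth design is an assumption."""
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, signal)

# Example: 1 s of a 440 Hz tone standing in for a speech waveform,
# low-pass filtered and then interrupted at 2 Hz.
fs = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
processed = interrupt(lowpass(tone, fs), fs, rate_hz=2)
```

At a 2 Hz rate with a 50% duty cycle, each 500 ms period contains 250 ms of signal followed by 250 ms of silence, so half of the samples are zeroed; interrupted text would be produced analogously by blanking character spans rather than samples.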