Historically, research on the contribution of vowels to speech understanding lagged behind that of consonants. Speech synthesis techniques were then developed which established that the primary acoustic features of vowels are formant frequencies, fundamental frequency, speech dynamics, and naturalness. Speech perception research required precise control of these features in vowel stimuli. Starting in the 1980s, the Klatt formant synthesizer was the first important tool, and by 2000 Kawahara’s STRAIGHT synthesizer could generate nearly natural speech. As a baseline for vowel perception, my research sought to determine psychophysical thresholds for F1 and F2 under ideal conditions. This talk reports findings from my vowel perception studies on three questions. First, how do formant thresholds change with speech context (from isolated vowels to sentences), across age, and with hearing impairment? Second, do vowels or consonants contribute more to intelligibility in noise-interrupted sentences? Third, given formant dynamics, how is intelligibility affected when consonant-vowel boundary conditions are manipulated, across age and hearing impairment? Significant results include: (1) vowels carry more information about sentence intelligibility than consonants for both young and older listeners; and (2) although older listeners’ performance is reduced relative to that of young listeners, hearing impairment has a greater negative impact than age-related cognitive decline.