Stilp and colleagues (Proc. Natl. Acad. Sci. [2010]; JASA [2011]; PLoS One [2012]) demonstrated that auditory perception rapidly and automatically exploits redundancy among acoustic attributes in novel complex sounds. When stimuli exhibited robust covariance between acoustic dimensions (attack/decay, spectral shape), discrimination of sound pairs violating this pattern was initially poorer than discrimination of sound pairs respecting it. While these results support efficient coding of statistical structure in the environment, evidence of its contribution to speech perception remains indirect. The present effort examines perceptual organization in accordance with statistical regularities in speech sounds. Vowel stimuli (/ɑ/, “ah”) were synthesized to reflect the natural correlation between formant frequencies across talkers: as vocal tract length decreases (from men to women to children), formant center frequencies increase (here, F1 and F2 varied; all other parameters were held constant). Listeners discriminated vowel pairs that either obeyed this correlation (16 pairs) or violated it (1 pair) in randomized AXB trials without feedback. Performance replicated earlier results with nonspeech sounds: vowels that violated the natural redundancy between formant frequencies were discriminated more poorly than vowels that obeyed it. Results encourage an efficient coding approach to speech perception, in which redundancy among stimulus attributes is exploited to facilitate perceptual organization and discrimination. [Supported by NIDCD.]
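
To make the stimulus and trial design concrete, the Python sketch below constructs F1/F2 pairs along a positive correlation line (16 adjacent pairs that obey the correlation, plus one pair with swapped F2 values that violates it) and assembles randomized AXB trials. This is a minimal illustration, not the authors' synthesis code: the frequency ranges, number of steps, and the specific off-line pair are hypothetical placeholders chosen only to match the obey/violate structure described in the abstract.

```python
import random

# Hypothetical F1/F2 grid for /a/ along the natural talker correlation:
# as vocal tract length decreases, F1 and F2 rise together.
# All Hz values are illustrative placeholders, not the actual
# synthesis parameters from the experiment.
N_STEPS = 17                   # 17 points -> 16 adjacent "obey" pairs
F1_LO, F1_HI = 700.0, 1000.0   # assumed F1 range across talkers
F2_LO, F2_HI = 1100.0, 1600.0  # assumed F2 range across talkers

def formants_on_line(i: int) -> tuple[float, float]:
    """F1/F2 pair at step i along the natural correlation line."""
    t = i / (N_STEPS - 1)
    return (F1_LO + t * (F1_HI - F1_LO), F2_LO + t * (F2_HI - F2_LO))

# Sixteen adjacent-step pairs that obey the correlation...
obey_pairs = [(formants_on_line(i), formants_on_line(i + 1))
              for i in range(N_STEPS - 1)]

# ...and one pair that violates it: swapping the two F2 values makes
# F1 rise while F2 falls, breaking the natural covariance pattern.
mid = N_STEPS // 2
(f1_a, f2_a), (f1_b, f2_b) = formants_on_line(mid), formants_on_line(mid + 1)
violate_pair = ((f1_a, f2_b), (f1_b, f2_a))

def axb_trial(pair, rng: random.Random) -> dict:
    """One AXB trial: X matches either A or B; the listener says which."""
    a, b = pair
    x_is_a = rng.random() < 0.5
    return {"A": a, "X": a if x_is_a else b, "B": b,
            "answer": "A" if x_is_a else "B"}

rng = random.Random(0)
trials = [axb_trial(p, rng) for p in obey_pairs + [violate_pair]]
rng.shuffle(trials)  # randomized presentation order; no feedback is given
print(trials[0])
```

The swapped-F2 construction is one simple way to produce an off-line (correlation-violating) pair while reusing the same component frequencies as the on-line pairs; the actual stimuli may have sampled the violating pair differently.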