In this model, speech perception by adults is characterized as an active, information-seeking process whereby native listeners detect the most reliable acoustic parameters that specify phonetic segments and sequences, using highly over-learned, automatic selective perception routines. In laboratory perceptual tasks, differentiation of native (L1) phonetic contrasts is rapid and remains robust in suboptimal listening conditions, even when listeners focus on other levels of language structure or indeed on another task. In contrast, late L2 learners must employ greater attentional resources in order to extract sufficient information to differentiate phonetic contrasts that do not occur in their native language. Phonetic and phonological modes of speech perception are described; these modes can be tapped in the laboratory by manipulations of stimulus complexity and task demands. Such experimental manipulations reveal complex interactions between the linguistic experience of listeners and the phonetic similarity relationships between L1 and L2 phonological inventories. Illustrative experimental evidence from studies of vowel perception using perceptual assimilation (cross-language identification), speeded discrimination, discrimination in speech babble, and brain indices of discrimination (MMN) will be presented to provide operational definitions of these concepts. Similarities to and differences from other current theories of cross-language and L2 speech perception will be discussed. [Work supported by NIH, NSF.]