Speech scientists have held out the hope that discoveries in auditory physiology and psychophysics could be used to guide the design of mechanical speech processing devices. For example, from the speech scientist's point of view, an ideal peripheral auditory system for man might be one that would present to the central nervous system a representation of speech waveforms in which acoustic cues to phonetic contrasts were highlighted, while irrelevant acoustic details associated with changes in speech level, cross‐speaker differences, and speech transmission channel characteristics were minimized. Recent studies of the auditory periphery suggest that the desired (spectral) pattern invariance with change in SPL is not met in the patterns of average firing rates of primary auditory neurons, but that level independence is more nearly preserved in the pattern of interspike intervals [Sachs and Young, J. Acoust. Soc. Am. 68, 858 (1980)]. However, interpretation of interspike interval data requires an autocorrelation‐like analysis that is presumably performed more centrally. Thus peripheral frequency selectivity is only one of several factors constraining the input representation of speech sounds, and builders of speech processing systems are perhaps on safer ground at this time if they rely on psychophysical data (e.g., critical‐band concepts) to guide their designs. One promising new psychophysical task is a perceived phonetic distance paradigm. Use of this technique to design a perceptually motivated distance metric for phonetic recognition will be discussed. [Work supported in part by an NSF grant.]
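The autocorrelation‐like analysis of interspike intervals mentioned above can be illustrated with a minimal sketch: an all‐order interspike‐interval histogram, which is equivalent to the autocorrelation of a spike train. The function names and the synthetic, perfectly phase‐locked spike train below are illustrative assumptions, not the analysis of Sachs and Young.

```python
def interval_histogram(spike_times, bin_width, max_interval):
    """All-order interspike-interval histogram (spike-train autocorrelation).

    spike_times must be sorted; times and widths share one unit (here, ms).
    """
    n_bins = int(max_interval / bin_width)
    hist = [0] * n_bins
    for i in range(len(spike_times)):
        for j in range(i + 1, len(spike_times)):
            delta = spike_times[j] - spike_times[i]
            if delta >= max_interval:
                break  # later spikes only give longer intervals
            hist[int(delta / bin_width)] += 1
    return hist

# Toy spike train phase-locked to a 100-Hz tone: one spike every 10 ms.
spikes = [10.0 * k for k in range(50)]
h = interval_histogram(spikes, bin_width=1.0, max_interval=30.0)
# The histogram peaks at multiples of the 10-ms stimulus period
# (bins 10 and 20), the periodicity cue presumed to be read out centrally.
```

Because such a histogram pools intervals across all spike pairs, its peak structure tracks stimulus periodicity even when average firing rates saturate with level, which is the sense in which the interval pattern is more nearly level independent.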
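As a hedged sketch of what a critical‐band‐motivated distance metric might look like, the fragment below pools a spectrum into Bark bands (using Traunmüller's Hz‐to‐Bark approximation) and takes a Euclidean distance between the band patterns. This is an illustrative stand‐in, not the perceived‐phonetic‐distance metric the abstract discusses; the band count, pooling rule, and distance form are all assumptions.

```python
import math

def hz_to_bark(f):
    """Traunmüller's approximation to the Bark (critical-band) scale."""
    return 26.81 * f / (1960.0 + f) - 0.53

def critical_band_spectrum(spectrum_hz_db, n_bands=18):
    """Average dB levels into one value per Bark band (a crude pooling rule)."""
    bands = [[] for _ in range(n_bands)]
    for f, level in spectrum_hz_db:
        b = min(int(hz_to_bark(f)), n_bands - 1)
        if b >= 0:
            bands[b].append(level)
    return [sum(b) / len(b) if b else 0.0 for b in bands]

def phonetic_distance(spec_a, spec_b):
    """Euclidean distance between critical-band spectra (illustrative metric)."""
    a = critical_band_spectrum(spec_a)
    b = critical_band_spectrum(spec_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy usage: two spectra given as (frequency in Hz, level in dB) pairs.
vowel_a = [(500.0, 60.0), (1500.0, 55.0)]
vowel_b = [(500.0, 50.0), (1500.0, 55.0)]
d = phonetic_distance(vowel_a, vowel_b)
```

A perceptually motivated version of such a metric would replace the uniform Euclidean weighting with weights fitted to perceived‐phonetic‐distance judgments, which is the design step the abstract proposes.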