In this paper, we report pilot experiments evaluating the feasibility of a model that predicts human recognition of speech sounds in noise at different speaking rates. CVC stimuli comprising a phonetically balanced set of 13 consonants and three vowels (/i/, /a/, /u/) were recorded in a soundproof booth by two talkers at two speaking rates (fast and slow). Noisy stimuli were generated by adding babble noise at different levels to the quiet recordings. These stimuli were used in perceptual experiments in which listeners repeated back the CVC phrases presented in babble noise under three SNR conditions at both speaking rates. The responses were transcribed by two trained linguists. Consonant confusion matrices were generated from these data and analyzed by noise level, talker, center vowel, and speaking rate. With the exception of /CuC/ stimuli, speaking rate had the most pronounced effect on perception, with slow speech being more intelligible than fast speech in noise. /CaC/ stimuli were, on average, more robust than other stimuli in all conditions, and one talker was significantly more intelligible than the other. A detailed analysis of the results will be presented. [Work supported in part by the NSF.]
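
The abstract does not specify how the babble noise was scaled to reach each SNR condition; the following is a minimal sketch of one standard approach, assuming SNR is defined from the mean power of the speech and noise signals sampled at a common rate. The function name mix_at_snr is hypothetical, not from the original study.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Add babble noise to a clean recording at a target SNR (in dB).

    Assumes `noise` is at least as long as `speech`; a segment of
    matching length is used. SNR here is a power ratio:
    SNR_dB = 10 * log10(P_speech / P_noise).
    """
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Desired noise power is P_speech / 10^(SNR_dB / 10); solve for the
    # amplitude scale factor applied to the noise segment.
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + scale * noise
```

Under this convention, lower (or negative) snr_db values yield proportionally stronger babble relative to the speech, matching the progressively harder SNR conditions described above.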