Values for the fundamental frequency and F1, F2, and F3 were obtained for a corpus of 1248 vocalic nuclei from CVCs (26 phonemically different vocalic nuclei × 4 speakers × 2 stress/rate conditions × 3 consonantal contexts × 2 repetitions of each token) at 25 equally spaced times within each vocalic nucleus. This corpus included monophthongs /i, ɪ, ɛ, æ, ɑ, ʌ, ɔ, ʊ, u/, diphthongs /au, ai, ei, ou, ɔi/, rhotic vowels /ir, er, ar, or, ɝ/, and vowels followed by /ɫ/: /iɫ, ɪɫ, ɛɫ, aɫ, ʌɫ, ɔɫ, uɫ/. These were spoken in /b—d/, /d—d/, and /g—d/ contexts. A standard back-propagation neural network with one hidden layer was trained to identify which of the 26 vocalic nuclei was spoken. Input data for the neural network were presented as (1) an auditory-perceptual space using (x′, y′, z′) coordinates or (2) other implementations of the fundamental frequency and the first three formants. Preliminary results indicate that the neural network can identify these nuclei on the basis of acoustic parameters alone. [Work supported by NIDCD.]
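The classifier described above can be sketched as a one-hidden-layer back-propagation network mapping F0 and the first three formants, sampled at 25 equally spaced times (4 × 25 = 100 input features), to 26 nucleus classes. The hidden-layer size, learning rate, epoch count, and the synthetic stand-in data below are illustrative assumptions, not values reported in the abstract:

```python
import numpy as np

# Hedged sketch of a one-hidden-layer back-propagation network for
# 26-way vocalic-nucleus classification. Inputs: F0, F1, F2, F3 at
# 25 equally spaced times (100 features). Hidden size, learning
# rate, and epoch count are assumptions; the data are synthetic.

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 100, 50, 26   # 50 hidden units is an assumption
LR, EPOCHS = 0.5, 300

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Synthetic stand-in corpus: one noisy cluster per nucleus class
# (the real study used measured formant tracks, not clusters).
centers = rng.normal(size=(N_OUT, N_IN))
X = np.repeat(centers, 8, axis=0) + 0.1 * rng.normal(size=(N_OUT * 8, N_IN))
y = np.repeat(np.arange(N_OUT), 8)
T = np.eye(N_OUT)[y]               # one-hot targets

# Small random weight initialization
W1 = rng.normal(scale=0.1, size=(N_IN, N_HID)); b1 = np.zeros(N_HID)
W2 = rng.normal(scale=0.1, size=(N_HID, N_OUT)); b2 = np.zeros(N_OUT)

for _ in range(EPOCHS):
    # Forward pass: sigmoid hidden layer, softmax output
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    P = softmax(H @ W2 + b2)
    # Backward pass: cross-entropy gradient propagated to both layers
    dZ2 = (P - T) / len(X)
    dW2 = H.T @ dZ2; db2 = dZ2.sum(axis=0)
    dH = (dZ2 @ W2.T) * H * (1 - H)
    dW1 = X.T @ dH; db1 = dH.sum(axis=0)
    # Gradient-descent weight update
    W2 -= LR * dW2; b2 -= LR * db2
    W1 -= LR * dW1; b1 -= LR * db1

accuracy = (P.argmax(axis=1) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

On well-separated synthetic clusters the network reaches high training accuracy, which parallels the abstract's preliminary finding that the nuclei are identifiable from the acoustic parameters alone; the real evaluation would of course use measured (x′, y′, z′) or formant-based inputs.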