This paper describes a method of speaker normalization using neural networks. Spectral feature vectors from one speaker's (speaker A) spectral space are mapped to the spectral space of a reference speaker (speaker R) using a feedforward multilayer perceptron (MLP) neural network. The MLP is trained using phonetically aligned speech data from the two speakers. The original and mapped feature vectors from speaker A are then used as input to a speaker-dependent speech recognizer trained for speaker R. The recognition task is to recognize 13 vowels from the TIMIT database; the improvement in recognition accuracy is taken as a measure of the effectiveness of the normalization scheme. The effectiveness of the technique is compared across several input representations: line spectrum frequencies, weighted cepstral coefficients, and the outputs of the Payton and Patterson-Holdsworth auditory models. [Work performed under the AFOSR Summer Faculty Research Program.]
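The mapping described above can be sketched as a simple regression MLP trained on phonetically aligned frame pairs. The following NumPy sketch is illustrative only: the feature dimensionality, hidden-layer size, learning rate, and synthetic stand-in data are assumptions, not the configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions; the paper does not specify layer sizes or
# feature dimensionality.
DIM = 12      # e.g., 12 coefficients per frame (assumption)
HIDDEN = 20   # hidden-layer size (assumption)

# Phonetically aligned frame pairs: X_a[i] from speaker A corresponds
# to the same phonetic event as X_r[i] from reference speaker R.
# Synthetic stand-in data; real inputs would be aligned speech frames.
X_a = rng.normal(size=(500, DIM))
X_r = np.tanh(X_a @ rng.normal(size=(DIM, DIM))) + 0.1 * rng.normal(size=(500, DIM))

# One-hidden-layer MLP with linear output: maps a speaker-A feature
# vector toward speaker R's spectral space (a regression problem).
W1 = rng.normal(scale=0.1, size=(DIM, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(HIDDEN, DIM)); b2 = np.zeros(DIM)

lr = 0.01
for epoch in range(200):
    # Forward pass
    h = np.tanh(X_a @ W1 + b1)
    y = h @ W2 + b2
    err = y - X_r                       # mean-squared-error gradient term
    # Backward pass (plain batch gradient descent)
    gW2 = h.T @ err / len(X_a); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)      # tanh derivative
    gW1 = X_a.T @ dh / len(X_a); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def normalize(frames):
    """Map speaker-A feature vectors into speaker R's spectral space."""
    return np.tanh(frames @ W1 + b1) @ W2 + b2

# The mapped vectors would then be fed to the speaker-dependent
# recognizer trained for speaker R.
mapped = normalize(X_a)
```

In this framing the MLP performs nonlinear regression between the two speakers' spectral spaces; recognition accuracy on the mapped vectors, relative to the unmapped ones, serves as the evaluation criterion described in the abstract.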