This paper presents results from an investigation of how stress and emotion affect speech characteristics, with specific application to improving automatic speech recognition. Past studies have been limited in scope, often using only one or two subjects and analyzing only one or two parameters (typically involving pitch). A comprehensive speech-under-stress database has been established at Georgia Tech for the purposes of stress research. The database is partitioned into five domains, encompassing a wide variety of stresses that include: various talking styles (slow, fast, soft, loud, angry, clear, question, in noise), single and dual tracking stress-inducing workload tasks, emotional speech from psychiatric analysis sessions, and subject motion-fear tasks. A total of 32 speakers were employed to generate in excess of 16 000 utterances. The database evaluation was partitioned into three areas. Analysis was first performed on (i) speech with simulated stress and (ii) speech from stress-inducing workload tasks or speech in noise. Statistically significant parameters were established, and an equivalent analysis was carried out over (iii) speech produced under actual stress and emotion. This scheme was chosen since simulated conditions allowed for careful control of vocabulary, task requirements, and background noise characteristics. Evaluation over actual stress or emotional conditions was used to verify results established under simulated conditions. Variables employed in the evaluation include: pitch (mean, variance, higher moments, point-process characterization), glottal waveform characteristics, glottal source spectrum (spectral tilt, energy concentration), duration, intensity, formant locations and bandwidths, vocal-tract shaping based on line spectral pair coefficients, and the variability of each of these. Resulting categorizations will be presented, as well as the statistical significance of each set of measurements. [Research supported by U.S. Army Human Engineering Labs.]
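As an illustration of the pitch statistics named among the evaluation variables (mean, variance, and higher moments of the F0 contour), the following is a minimal sketch, not the authors' implementation. It assumes an F0 contour in hertz with unvoiced frames coded as zero (a common but here assumed convention), and uses `scipy.stats` for the third and fourth standardized moments.

```python
# Illustrative sketch (not the authors' code): per-utterance pitch
# statistics of the kind evaluated in the paper -- mean, variance,
# and higher moments (skewness, excess kurtosis) of the F0 contour.
# Assumes f0 is an array of pitch estimates in Hz, with unvoiced
# frames marked as 0 (an assumed convention).

import numpy as np
from scipy import stats

def pitch_statistics(f0):
    """Return mean, variance, skewness, and kurtosis of voiced F0 frames."""
    voiced = f0[f0 > 0]  # keep voiced frames only
    return {
        "mean": float(np.mean(voiced)),
        "variance": float(np.var(voiced)),
        "skewness": float(stats.skew(voiced)),      # 3rd standardized moment
        "kurtosis": float(stats.kurtosis(voiced)),  # excess 4th moment
    }

# Example: a synthetic contour with raised, more variable pitch,
# as might occur under loud or angry talking styles.
rng = np.random.default_rng(0)
f0 = np.concatenate([np.zeros(20), 180 + 25 * rng.standard_normal(200)])
print(pitch_statistics(f0))
```

Statistics of this form, computed per utterance and compared across talking styles and stress conditions, are one natural way to test the statistical significance of pitch differences the abstract describes.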