We investigated speech and nonspeech auditory processing of temporal and spectral cues in people who do and do not stutter. We also asked whether self-reported stuttering severity could be predicted by performance on the auditory processing measures. People who stutter (n = 23) and people who do not stutter (n = 28) completed a series of four auditory processing tasks online. The tasks used speech and nonspeech stimuli differing in spectral or temporal cues. We then used independent-samples t-tests to compare phonetic categorization slopes between groups, and linear mixed-effects models to test group differences in nonspeech auditory processing and to predict self-reported stuttering severity from performance on all auditory processing tasks. We found statistically significant differences between people who do and do not stutter in phonetic categorization of a continuum differing in a temporal cue and in discrimination of nonspeech stimuli differing in a spectral cue. Performance on the auditory processing measures also predicted a significant proportion of variance in self-reported stuttering severity. Taken together, these results suggest that people who stutter process both speech and nonspeech auditory information differently from people who do not stutter, and they may point to subtle differences in auditory processing that could contribute to stuttering. We also note that these patterns could be a consequence of listening to one's own speech, rather than a cause of production differences.
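As a minimal sketch of the analysis approach described above, the group comparison of categorization slopes and the mixed-effects model for nonspeech discrimination might be set up as follows. This assumes a Python/statsmodels workflow and hypothetical file and column names; the authors' actual software and data structure are not specified here.

```python
# Sketch only: column names, file names, and the Python toolchain are assumptions,
# not details from the study itself.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant for categorization slopes,
# and one row per trial for the nonspeech discrimination task.
slopes = pd.read_csv("categorization_slopes.csv")  # participant, group, slope
trials = pd.read_csv("nonspeech_trials.csv")       # participant, group, cue_step, correct

# 1) Independent-samples t-test on phonetic categorization slopes between groups.
stutter = slopes.loc[slopes["group"] == "stutter", "slope"]
control = slopes.loc[slopes["group"] == "control", "slope"]
t_stat, p_val = stats.ttest_ind(stutter, control)
print(f"slope comparison: t = {t_stat:.2f}, p = {p_val:.3f}")

# 2) Linear mixed-effects model for nonspeech discrimination performance,
#    with group and cue step as fixed effects and a by-participant random intercept.
model = smf.mixedlm("correct ~ group * cue_step", data=trials,
                    groups=trials["participant"]).fit()
print(model.summary())
```

The severity analysis would follow the same pattern, regressing self-reported severity on performance across all four tasks.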