Discrimination of an acoustic variable (various durations of silence) was measured when, as part of a synthetic speech pattern, that variable cued a phonemic distinction and when the same variable appeared in a non-speech context. In the speech case, the durations of silence separated the two syllables of a synthesized word, causing it to be heard as "rabid" when the intersyllabic silence was short and as "rapid" when it was long. With acoustic differences equal, discrimination proved more acute across the /b,p/ phoneme boundary than within either phoneme category. This effect approximated what one would expect on the extreme assumption that the listeners could hear these sounds only as phonemes and could discriminate no other differences among them; however, the approximation was not so close as for certain other consonant distinctions. In the case of the non-speech sounds, the same durations of silence separated two bursts of noise tailored to match the onset, duration, and offset characteristics of the speech signals. With these stimuli there was no appreciable increase in discrimination in the region corresponding to the location of the phoneme boundary. If we assume that the functions obtained with the non-speech patterns represent the basic discriminability of the durations of silence, free of the influence of linguistic training, we may conclude that the discrimination peaks in the speech functions reflect an effect of learning on perception. It was found, too, that discrimination of the non-speech patterns was, in general, poorer than that of the speech. From this we conclude that the effect of learning must have been to increase discrimination across the phoneme boundary; there was no evidence of a reduction in discrimination within the phoneme category.
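The "extreme assumption" above can be made concrete: if listeners register only the phoneme label, then predicted ABX discrimination follows directly from the identification probabilities of the two stimuli, with performance at chance whenever both stimuli draw the same label. A minimal sketch of that prediction, assuming a simple covert-labeling model and using hypothetical identification probabilities (not the paper's data):

```python
# Predicted ABX discrimination under the extreme assumption that listeners
# can hear the stimuli only as phoneme labels, so discrimination is
# derivable from identification alone. The probabilities used in the
# examples are illustrative, not the experiment's measured values.

def predicted_abx(p_a: float, p_b: float) -> float:
    """Predicted proportion correct on an ABX trial.

    Model: the listener covertly labels A, B, and X (label probabilities
    p_a and p_b for the two stimulus types), matches X to the stimulus
    whose label it shares, and guesses whenever A and B receive the same
    label. This reduces to the closed form 0.5 + 0.5 * (p_a - p_b) ** 2.
    """
    return 0.5 + 0.5 * (p_a - p_b) ** 2

# Within-category pair: both silence durations almost always heard as
# /b/ ("rabid"), so predicted discrimination stays near chance.
within = predicted_abx(0.95, 0.90)   # ≈ 0.501

# Cross-boundary pair: the two durations usually draw different labels,
# so the model predicts the discrimination peak at the boundary.
across = predicted_abx(0.90, 0.10)   # = 0.82
```

The flat non-speech functions correspond, on this model, to identification probabilities that never diverge across the continuum, so no peak is predicted there.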