Abstract

A comparison has been made between the performance of a computer procedure of speaker verification [Doddington, J. Acoust. Soc. Amer. 49, 139(A) (1971)] and listener performance in the same task. In the evaluation of the computer method, 32 “casual” impostors were pitted against eight “true” speakers. A “casual” impostor is one who makes no attempt to mimic the “true” speaker but simply repeats the same test sentences in his own natural voice. After an a posteriori adjustment of the acceptance-rejection criterion to equalize errors of false acceptance and false rejection, an average error rate of 1.5% is obtained. The same test utterances used by Doddington were used as stimuli in a subjective speaker verification experiment in which 10 listeners participated. Each stimulus presentation was a paired comparison consisting of a challenge and a reference utterance. The reference was one of the “true” speaker utterances while the challenge was either an utterance from the same “true” speaker or an “impostor” utterance with equal likelihood. Listeners were required to indicate whether they thought the utterances were by the same or different speakers. The over-all average error rates were approximately 4.2% for both false acceptance and false rejection. The best false acceptance rate by an individual listener was 1.6%, while the best individual false rejection rate was 0.5%.
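The a posteriori criterion adjustment described above corresponds to what is now called the equal-error-rate operating point: the acceptance threshold is swept over the observed scores until false acceptance and false rejection occur at the same rate. A minimal sketch of that idea follows; the function name and the score values are illustrative assumptions, not taken from the paper, and higher scores are assumed to indicate a closer match to the "true" speaker.

```python
def equal_error_threshold(true_scores, impostor_scores):
    """Return (threshold, far, frr) minimizing |FAR - FRR|.

    A claimed identity is accepted when its score is >= threshold,
    so the false-acceptance rate (FAR) falls and the false-rejection
    rate (FRR) rises as the threshold increases.
    """
    candidates = sorted(set(true_scores) | set(impostor_scores))
    best = None
    for t in candidates:
        # Impostor trials scoring at or above the threshold are falsely accepted.
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        # True-speaker trials scoring below the threshold are falsely rejected.
        frr = sum(s < t for s in true_scores) / len(true_scores)
        if best is None or abs(far - frr) < abs(best[1] - best[2]):
            best = (t, far, frr)
    return best

# Illustrative (made-up) scores: true-speaker trials tend to score high,
# casual-impostor trials low, with some overlap.
true_scores = [0.9, 0.8, 0.85, 0.7, 0.95, 0.6]
impostor_scores = [0.2, 0.3, 0.5, 0.1, 0.4, 0.65]

threshold, far, frr = equal_error_threshold(true_scores, impostor_scores)
```

With these toy scores the sweep settles on the threshold where exactly one impostor trial is accepted and one true-speaker trial is rejected, i.e. FAR = FRR; the 1.5% figure reported in the abstract was obtained by the analogous balancing on the real test data.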

