Abstract

This paper describes a novel approach to class-dependent modeling and its application to automatic text-independent speaker verification. The approach maximizes the conditional mutual information between the model scores and the class identity, subject to constraints on the scores. It is shown in the paper that maximizing the differential entropy of the scores generated by the classifier or detector is an equivalent criterion. The approach allows different components of the feature vector used for detection to be emphasized for different target speakers. In this paper, we apply the approach to the NIST 2003 1-speaker verification task. Compared to the baseline system, a relative improvement of about 10% in the minimum detection cost function (DCF) is obtained.
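
A minimal sketch of why the two criteria coincide, assuming (as the abstract suggests but does not state explicitly) that the constraints on the scores fix the class-conditional entropy:

\[
I(S;\,C) \;=\; h(S) \;-\; h(S \mid C),
\]

where \(S\) denotes the model scores and \(C\) the class (target speaker) identity. If the constraints hold \(h(S \mid C)\) constant, then maximizing the conditional mutual information \(I(S;C)\) reduces to maximizing the differential entropy \(h(S)\) of the scores. The paper's exact constraints and derivation are given in the body of the work; this decomposition is only an illustrative reading of the stated equivalence.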
