Abstract
It is widely known that the quality of confidence measures is critical for speech applications. In this paper, we present our recent work on improving word confidence scores by calibrating them with a small set of calibration data, when only the recognized word sequence and the associated raw confidence scores are available. The core of our technique is a maximum entropy model with distribution constraints, which naturally and effectively makes use of the word distribution, the raw confidence-score distribution, and the context information. We demonstrate the effectiveness of our approach by showing that it achieves relative reductions of 38% in mean squared error (MSE), 39% in negative normalized likelihood (NNLL), and 23% in equal error rate (EER) on a voice mail transcription data set, and relative reductions of 35% in MSE, 45% in NNLL, and 35% in EER on a command and control data set.
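As a rough illustration of the calibration idea (not the authors' implementation), a maximum entropy model over binary word correctness reduces to a logistic model whose features can encode the raw confidence score, a coarse bin of that score (approximating the score distribution), and the word identity (approximating the word distribution). The feature set and training procedure below are illustrative assumptions, sketched as a minimal maxent calibrator trained by gradient ascent:

```python
import math

def features(word, raw_score):
    """Map a (word, raw confidence) pair to a sparse feature dict.
    The specific features are illustrative, not the paper's."""
    return {
        "bias": 1.0,
        "raw": raw_score,                    # raw confidence score itself
        f"bin={int(raw_score * 10)}": 1.0,   # coarse score-distribution bin
        f"word={word}": 1.0,                 # word-identity feature
    }

def train(data, epochs=200, lr=0.5):
    """Fit maxent weights by gradient ascent on the log-likelihood.
    `data` is a list of (word, raw_score, correct) with correct in {0, 1}."""
    w = {}
    for _ in range(epochs):
        for word, raw, correct in data:
            f = features(word, raw)
            z = sum(w.get(k, 0.0) * v for k, v in f.items())
            p = 1.0 / (1.0 + math.exp(-z))   # calibrated P(word is correct)
            for k, v in f.items():           # gradient of log-likelihood
                w[k] = w.get(k, 0.0) + lr * (correct - p) * v
    return w

def calibrate(w, word, raw_score):
    """Return the calibrated confidence for one recognized word."""
    f = features(word, raw_score)
    z = sum(w.get(k, 0.0) * v for k, v in f.items())
    return 1.0 / (1.0 + math.exp(-z))
```

Given a small calibration set of recognized words labeled correct/incorrect, `train` produces weights, and `calibrate` maps a raw score to a better-calibrated posterior; the paper's distribution constraints and context features are richer than this sketch.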