Abstract

One significant problem for spoken language systems is how to cope with users' out-of-domain (OOD) utterances, which cannot be handled by the back-end system. In this paper, we propose a novel OOD detection framework that makes use of the classification confidence scores of multiple topics and trains a linear discriminant in-domain verifier using Generalized Probabilistic Descent (GPD). Training is based on deleted interpolation of the in-domain data and thus does not require actual OOD data, providing high portability. Three topic classification schemes, based on word N-gram models, latent semantic analysis (LSA), and support vector machines (SVM), are evaluated; SVM is shown to have the greatest discriminative ability. In an OOD detection task, the proposed approach achieves an absolute reduction in equal error rate (EER) of 6.5% compared to a baseline method based on a simple combination of multiple-topic classifications. Furthermore, comparison with a system trained using OOD data demonstrates that the proposed training scheme attains comparable performance while requiring no knowledge of the OOD data set.
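As a rough illustration of the verification step described above, the sketch below treats the in-domain verifier as a linear combination of per-topic confidence scores compared against a threshold. This is a minimal sketch under stated assumptions, not the paper's implementation: the function names, example scores, weights, and threshold are all hypothetical, and in the proposed framework the weights would be trained with GPD on deleted-interpolation partitions of the in-domain data rather than fixed by hand.

```python
# Hypothetical sketch of linear-discriminant in-domain verification over
# multiple topic-classification confidence scores. All names, weights, and
# the threshold are illustrative, not values from the paper; the paper
# trains the weights discriminatively with GPD using deleted interpolation
# of the in-domain data.

from typing import Sequence


def in_domain_score(confidences: Sequence[float],
                    weights: Sequence[float]) -> float:
    """Linear combination of per-topic classification confidence scores."""
    return sum(w * c for w, c in zip(weights, confidences))


def is_in_domain(confidences: Sequence[float],
                 weights: Sequence[float],
                 threshold: float) -> bool:
    """Accept the utterance as in-domain if the combined score clears the threshold."""
    return in_domain_score(confidences, weights) >= threshold


# Example: confidence scores for three topics from an upstream classifier
# (e.g., SVM decision values mapped to [0, 1]), with illustrative weights.
scores = [0.82, 0.10, 0.05]
weights = [1.0, 1.0, 1.0]
print(is_in_domain(scores, weights, threshold=0.5))  # True: likely in-domain
```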
