Abstract

With naturalistic dialogue management, a spoken dialogue system behaves as a human would under similar conditions. This paper reports on an experiment to develop naturalistic clarification strategies for noisy speech recognition in the context of spoken dialogue systems. We collected a Wizard-of-Oz corpus in which human wizards with access to a rich set of clarification actions made clarification decisions online, based on human-readable versions of system data. The experiment compares an evaluation of calls to a baseline system in a library domain with calls to an enhanced version of the system. The new system has a clarification module, trained on the wizard data, that takes the form of a decision tree built from three machine-learned models. It replicates the wizards' ability to ground partial understandings of noisy input and to build upon them. The enhanced system achieves a significantly higher rate of task completion, greater task success, and improved efficiency.
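To illustrate the kind of architecture the abstract describes, the sketch below shows a decision tree that combines the outputs of three learned models to choose a clarification action, grounding reliably recognized concepts and asking only about the rest. All model names, thresholds, and action labels here are illustrative assumptions, not the paper's actual components.

```python
# Hypothetical sketch of a clarification policy combining three learned
# models in a decision tree, as the abstract describes. The three "models"
# are stubbed with confidence thresholds for illustration only.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """An ASR hypothesis with per-concept confidence scores."""
    text: str
    concept_scores: dict  # concept name -> confidence in [0, 1]

def model_reject(hyp):   # model 1 (assumed): is the utterance unusable?
    return max(hyp.concept_scores.values(), default=0.0) < 0.3

def model_accept(hyp):   # model 2 (assumed): is the whole utterance reliable?
    return min(hyp.concept_scores.values(), default=1.0) >= 0.8

def model_partial(hyp):  # model 3 (assumed): which concepts are grounded?
    return [c for c, s in hyp.concept_scores.items() if s >= 0.8]

def clarification_action(hyp):
    """Decision tree over the three model outputs."""
    if model_reject(hyp):
        return ("reprompt", [])              # nothing understood: ask again
    if model_accept(hyp):
        return ("accept", list(hyp.concept_scores))
    grounded = model_partial(hyp)
    if grounded:
        # Ground the reliable part and clarify only the remainder,
        # mirroring the wizards' partial-understanding strategy.
        return ("confirm_partial", grounded)
    return ("confirm_all", list(hyp.concept_scores))

# Example: the action concept is reliable, the title is not, so the
# policy grounds the action and clarifies only the title.
hyp = Hypothesis("reserve dubliners", {"action": 0.9, "title": 0.5})
print(clarification_action(hyp))  # -> ('confirm_partial', ['action'])
```

In this sketch the first two models handle the all-or-nothing cases, while the third enables the partial-grounding behavior the abstract credits the wizards with; a real implementation would replace the threshold stubs with the trained classifiers.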
