Abstract

Controlled user testing of the dialogue component of spoken language dialogue systems (SLDSs) has a natural focus on the detection, analysis and repair of dialogue design problems. It is not only dialogue designers and their systems that commit errors, however; users do so as well. Improving dialogue interaction is therefore not only a matter of reducing the number and severity of dialogue design problems but also of preventing avoidable user errors. Based on a controlled user test of the dialogue component of an implemented SLDS, the paper takes a systematic look at the dialogue errors made by users in the test corpus. A typology of user errors in spoken human-machine dialogue is presented and discussed, and potentially important dialogue design advice is derived from the fact that the notion of a ‘user error’ turns out to be one that must be handled with care.
