Abstract

Summary form only given. The need for accurate and flexible evaluation frameworks for spoken and multimodal dialogue systems has become crucial. In the early design phases of a spoken dialogue system, it is worthwhile to evaluate how easily users interact with different dialogue strategies, rather than how efficiently the system provides the required information. The success of a task-oriented dialogue system depends largely on its ability to provide a meaningful match between the user's expectations and the system's capabilities, and a good trade-off improves the user's effectiveness. The evaluation methodology comprises three steps. The first step aims to identify the tokens and relations that constitute the user's mental model of the task. Once these tokens and relations have been used to design one or more dialogue strategies, the evaluation enters its second step, a between-group experiment in which each strategy is tried by a representative set of experimental subjects. The third step measures user effectiveness in providing the spoken dialogue system with the information it needs to solve the task. The paper argues that applying this three-step evaluation method can increase our understanding of the user's mental model of a task during the early stages of development of a spoken language agent. Experimental data supporting this claim are reported.
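The third step, measuring user effectiveness, can be sketched as follows. This is a minimal illustrative example, not the paper's actual metric: it assumes effectiveness is scored as the fraction of the task tokens the system needs that a user actually supplied, averaged over the subjects assigned to each strategy in the between-group experiment. All data and names here are hypothetical.

```python
from statistics import mean

def effectiveness(provided_tokens, required_tokens):
    """Fraction of the required task tokens the user actually supplied
    during the dialogue (1.0 = fully effective)."""
    return len(set(provided_tokens) & set(required_tokens)) / len(required_tokens)

# Hypothetical between-group data: one dialogue per subject, recorded as
# the set of task tokens that subject managed to convey to the system.
required = {"origin", "destination", "date", "time"}
strategy_a = [{"origin", "destination", "date", "time"},
              {"origin", "destination", "date"}]
strategy_b = [{"origin", "date"},
              {"origin", "destination"}]

# Mean effectiveness per strategy group supports the between-group comparison.
score_a = mean(effectiveness(d, required) for d in strategy_a)
score_b = mean(effectiveness(d, required) for d in strategy_b)
```

In this sketch, `score_a` exceeds `score_b`, suggesting that strategy A elicits the needed information more effectively; a real evaluation would test the difference for statistical significance across the subject groups.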
