Abstract

Chatbots are increasingly used as digital self-coaching tools in various fields. One area of application is higher education, where coaching chatbots can accompany and support students' self-reflection processes. For the present study, a coaching chatbot on the topic of exam anxiety was developed within the Conversational AI framework Rasa; it is intended to enable low-threshold engagement with students' exam anxiety through solution- and resource-oriented questions. In the study, the disclosure behavior of the chatbot (self-disclosure, information disclosure, no disclosure) was varied in order to draw conclusions about acceptance and working alliance in chatbot coaching. Previous studies show that a chatbot's self-disclosure and/or information disclosure can have a positive effect on working-alliance-related constructs such as rapport. The online experiment comprised a chatbot coaching session and a subsequent survey. A total of 201 subjects participated across the three experimental conditions. Technical functionality, acceptance, and working alliance were moderate to good in all three experimental groups, and the subjects were willing to engage in interaction with the chatbot. However, no statistically significant differences could be demonstrated between the three experimental groups. Self-disclosure, information disclosure, or no disclosure does not appear to affect the acceptance and working alliance between user and chatbot in the use case of student coaching on exam anxiety. The results provide an important complement to previous studies on (self-)disclosure and lead to further research questions about effectiveness factors in chatbot coaching. One focus will be on exploring context; the conversation and contextual design are subject to further investigation and improvement.

Keywords: AI-based coaching · Conversational AI · Working alliance · Acceptance · Self-disclosure
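The abstract does not detail how the disclosure conditions were implemented. Purely as an illustration, the sketch below shows one way such a condition-dependent disclosure step could be realized as a Rasa custom action using the rasa_sdk; the slot name experimental_condition, the action name, and all message texts are illustrative assumptions, not details taken from the study.

# Hypothetical sketch: a Rasa custom action that varies the bot's disclosure
# behavior by experimental condition before asking the next coaching question.
# Slot name, action name, and message texts are assumptions for illustration.
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionDiscloseBeforeQuestion(Action):
    """Optionally sends a disclosure message, then a coaching question."""

    def name(self) -> Text:
        return "action_disclose_before_question"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # The assigned experimental condition is assumed to be stored in a slot.
        condition = tracker.get_slot("experimental_condition")

        if condition == "self_disclosure":
            dispatcher.utter_message(
                text="I sometimes feel unsure before big tasks myself."
            )
        elif condition == "information_disclosure":
            dispatcher.utter_message(
                text="Many students report exam anxiety; it is a very common experience."
            )
        # In the no-disclosure condition, only the coaching question is sent.

        # Send the next solution-/resource-oriented question defined in the domain.
        dispatcher.utter_message(response="utter_solution_oriented_question")
        return []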
