Background
Pedagogical agents are computerized talking heads or embodied animated avatars that help students learn by performing actions and holding conversations with students in natural language. In AutoTutor and other intelligent tutoring systems with natural-language conversation, dialogues occur between a tutor agent and the student. The agents adapt to the students' actions, verbal contributions, and, in some systems, their emotions (such as boredom, confusion, and frustration).

Focus of Study
This paper explores several designs of trialogues (two agents interacting with a human student) that have been productively implemented for particular students, subject matters, and depths of learning. The two agents take on different roles, most often serving as a peer and a tutor. Different trialogue designs address different pedagogical goals for different classes of students. For example, students can (a) vicariously observe two agents interacting, (b) converse with a tutor agent while a peer agent periodically chimes in, or (c) teach a peer agent while a tutor agent rescues problematic interactions. In addition, agents can argue with each other over issues and ask the human student what he or she thinks about the argument.

Research Design
Trialogues have been developed for systematic experimental investigations in several studies that measure student impressions, learning gains from pretest to posttest on objective tests, and both cognitive and affective states during learning. The studies compare conditions with different pedagogical principles underlying the trialogues in order to assess the impact of these principles on student impressions, learning, emotions, and other psychological measures. Discourse analyses of the language and actions recorded in log files assess their impacts on these psychological measures.

Recommendations
Tests of these agent-based systems have shown improvements in learning gains and systematic influences on student emotions. More research is needed to empirically evaluate the psychological impact of different trialogue designs, which range from scripted interactions between agents that the student observes, to the student helping a peer agent, to the student resolving an argument between two agents. The central question is whether the learning experiences and outcomes improve over typical human-computer dialogues (i.e., one human and one tutor agent) and conventional pedagogical interventions.