Abstract

We present a system that automatically generates deictic gestures for animated pedagogical agents (APAs). The system takes as input the audio and text of what the APA is to say and generates animated gestures based on a set of rules. The generated gestures point to the exact locations of elements on a whiteboard near the APA; these locations are found by searching the speech for keywords. We conducted a study with 100 subjects in which we compared lecture videos containing gestures automatically scripted by the system with videos of the same lecture containing manually scripted gestures. The results show that the manually scripted and automatically scripted lectures had a comparable number of gestures and that the gestures were timed equally well.
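To illustrate the keyword-to-location mechanism the abstract describes, here is a minimal sketch, assuming a timed transcript (word onsets, e.g. from forced alignment) and a hypothetical table mapping keywords to whiteboard coordinates. The names (`WHITEBOARD_ELEMENTS`, `plan_deictic_gestures`, `GestureEvent`) and the matching rule are illustrative assumptions, not the paper's actual rule set.

```python
from dataclasses import dataclass

@dataclass
class GestureEvent:
    time: float                  # onset in seconds within the speech audio
    target: tuple[float, float]  # (x, y) position on the whiteboard

# Hypothetical whiteboard layout: keyword -> normalized (x, y) coordinates.
WHITEBOARD_ELEMENTS = {
    "slope": (0.7, 0.4),
    "tangent": (0.5, 0.8),
    "derivative": (0.2, 0.6),
}

def plan_deictic_gestures(timed_words):
    """Scan a timed transcript for keywords naming whiteboard elements
    and emit one pointing gesture per match, timed to the word onset."""
    gestures = []
    for word, start_time in timed_words:
        key = word.lower().strip(".,;:")
        if key in WHITEBOARD_ELEMENTS:
            gestures.append(GestureEvent(start_time, WHITEBOARD_ELEMENTS[key]))
    return gestures

# Example: (word, onset time) pairs from the APA's speech.
transcript = [("the", 0.0), ("slope", 0.4), ("of", 0.8),
              ("the", 0.9), ("tangent", 1.1)]
for g in plan_deictic_gestures(transcript):
    print(f"point at {g.target} at t={g.time:.1f}s")
```

The output of such a planner would drive the APA's pointing animation; the actual system presumably applies richer rules (e.g. multi-word keys or gesture spacing), which the abstract does not detail.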
