Abstract

The current study investigates whether a virtual human’s voice affects users’ trust when interacting with the virtual human in a learning setting. It was hypothesized that trust is a malleable factor influenced by the quality of the virtual human’s voice. A randomized alternative-treatments design with a pretest assigned participants to one of three conditions: a low-quality Text-to-Speech (TTS) female voice (Microsoft speech engine), a high-quality TTS female voice (Neospeech voice engine), or a recorded human voice (native female English speaker). All three treatments were paired with the same female virtual human. Assessments included a self-report pretest on knowledge of meteorology, administered before participants viewed the instructional video, and a measure of system trust. The study found that voice type affects users’ trust ratings, with the human voice yielding higher ratings than either synthetic voice.
