Abstract
Language bias, both positive and negative, is a well-documented phenomenon among human interlocutors. We examine whether this bias extends to virtual assistants with various accents, specifically Apple's Siri and Google Assistant. We conducted three studies with different stimuli and designs to investigate U.S. English speakers' attitudes toward Google's British, Indian, and American voices and Apple's Irish, Indian, South African, British, Australian, and American voices. Analysis reveals consistently lower fluency ratings for the Irish, Indian, and South African voices (compared with the American ones) but no consistent evidence of bias in ratings of competence, warmth, or willingness to interact. Moreover, participants often misidentified the voices' countries of origin but correctly identified them as artificial. We conclude that this overall lack of bias may be due to two factors: the limited humanlikeness of the voices, and the unavailability of nonstandardized voices and of voices from countries toward which people in the United States typically show bias.