Abstract
This study scrutinizes Automated Speech Recognition (ASR) software, a powerful instrument for expediting the translation process, and its unintended bias against Arabic speakers, particularly refugees. We propose that pre-existing biases in ASR training data reflect societal prejudices, leading to orientalist and Islamophobic misrepresentations. We used four ASR tools to transcribe interviews with Arabic-speaking refugee women, employing ideological textual analysis to detect biases. Our findings indicate that ASR algorithms may inadvertently associate Arabic speakers with conflict, war, and religion, and may reduce Arab identities to Islam-centric representations. Acknowledging these biases is essential for fostering a more equitable and culturally sensitive technological environment.