Abstract

Cross-lingual generalization for under-explored languages has been broadly tackled in recent NLP studies. In this study, we apply that philosophy to the problem of translation gender bias, which necessarily involves multilingualism and socio-cultural diversity. Going beyond conventional evaluation criteria for social bias, we aim to incorporate various linguistic viewpoints into the measurement process and to create a template that makes the evaluation less tilted toward specific types of language pairs. Using a manually constructed set of content words and a template, we check both the accuracy of gender inference and the fluency of translation for German, Korean, Portuguese, and Tagalog. Inference accuracy and disparate impact, two bias-related factors that are associated with each other, show that a failure of bias mitigation threatens the delicacy of translation. Furthermore, our per-system and per-language analyses indicate that translation fluency and inference accuracy are not necessarily correlated. The results implicitly suggest that the abundance of language resources that boosts performance may also amplify bias cross-linguistically.
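As a concrete illustration of the two quantities named above, the sketch below computes gender-inference accuracy and a disparate-impact ratio over a toy template-based evaluation set. This is not the paper's actual code: the field names, the toy data, and the choice of defining disparate impact as the ratio of correct-inference rates between gender groups are all assumptions made for illustration.

```python
# Minimal sketch (assumed, not the paper's implementation) of the two metrics
# mentioned in the abstract: gender-inference accuracy and disparate impact.
from collections import defaultdict

def inference_accuracy(examples):
    """Fraction of translations whose inferred gender matches the source gender."""
    correct = sum(1 for ex in examples if ex["predicted_gender"] == ex["true_gender"])
    return correct / len(examples)

def disparate_impact(examples):
    """Ratio of correct-inference rates between the two gender groups.

    Values near 1.0 mean both groups are handled similarly; values far below
    1.0 indicate bias against the group in the numerator.
    """
    by_group = defaultdict(list)
    for ex in examples:
        by_group[ex["true_gender"]].append(ex)
    rates = {
        gender: sum(1 for ex in items if ex["predicted_gender"] == ex["true_gender"]) / len(items)
        for gender, items in by_group.items()
    }
    # Convention here (illustrative choice): female rate over male rate.
    return rates["female"] / rates["male"]

# Toy template-based evaluation set (hypothetical data).
examples = [
    {"true_gender": "female", "predicted_gender": "male"},
    {"true_gender": "female", "predicted_gender": "female"},
    {"true_gender": "male", "predicted_gender": "male"},
    {"true_gender": "male", "predicted_gender": "male"},
]

print("accuracy:", inference_accuracy(examples))        # 0.75
print("disparate impact:", disparate_impact(examples))  # 0.5
```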
