Abstract

This article examines the epistemic consequences of unfair technologies used in digital humanities (DH). We connect bias analysis informed by the field of algorithmic fairness with perspectives on knowledge production in DH. We examine the fairness of Danish Named Entity Recognition (NER) tools through an innovative experimental method involving data augmentation and evaluate the resulting performance disparities based on two metrics of algorithmic fairness: calibration within groups and balance for the positive class. Our results show that only two of the ten tested models comply with the fairness criteria. From an intersectional perspective, we shed light on how unequal performance across groups can exclude and marginalize certain social groups, causing their voices and experiences to be disregarded and silenced. We propose incorporating algorithmic fairness into the selection of tools in DH to help alleviate the risk of perpetuating this silence and to move towards fairer, more inclusive research.
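To make the two fairness criteria concrete, the following is a minimal Python sketch of how per-group disparities might be checked for a binary entity predictor. The data, group labels, and function names are illustrative assumptions, not the paper's actual evaluation code, and the criteria are simplified: calibration within groups is approximated as per-group precision among predicted positives, and balance for the positive class as the per-group mean score over true entities.

```python
import numpy as np

def calibration_within_groups(y_true, y_pred, groups):
    """Per-group fraction of predicted entities that are real entities.

    A simplified check of 'calibration within groups': among instances
    the model flags as entities, the share that truly are entities
    should be roughly equal across groups.
    """
    return {g: y_true[(groups == g) & (y_pred == 1)].mean()
            for g in np.unique(groups)}

def balance_for_positive_class(y_true, scores, groups):
    """Per-group mean confidence score over true entities.

    'Balance for the positive class' requires that real entities
    receive the same average score regardless of group membership.
    """
    return {g: scores[(groups == g) & (y_true == 1)].mean()
            for g in np.unique(groups)}

# Hypothetical toy data: 1 = entity, 0 = not an entity; group labels
# stand in for the augmented name origins compared in the experiment.
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 1, 0, 1, 0, 1, 1, 0])
scores = np.array([0.9, 0.8, 0.4, 0.7, 0.2, 0.6, 0.9, 0.3])
groups = np.array(["majority"] * 4 + ["minority"] * 4)

print(calibration_within_groups(y_true, y_pred, groups))
print(balance_for_positive_class(y_true, scores, groups))
```

Under both criteria, a model passes only if the per-group values are (approximately) equal; large gaps, such as systematically lower scores for true entities from one group, indicate the kind of disparity the article reports for most of the tested models.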
