The objective of this paper is to examine how artificial intelligence (AI) systems can reproduce phenomena of social discrimination and to develop an ethical strategy for preventing such occurrences. A substantial body of scholarship has demonstrated how AI can erode the rights of women and LGBT+ individuals, since it is capable of amplifying forms of discrimination that are already pervasive in society. This paper examines the principal approaches that have been put forward to counter the emergence of biases in AI systems, namely causal and counterfactual reasoning and constructivist methodologies. This analysis demonstrates the necessity of considering the sociopolitical context in which AI systems are developed when evaluating their ethical implications. To investigate this intersection, we apply the theory of gender performativity as theorized by Judith Butler and Karen Barad. Through an analysis of the notorious case of the COMPAS system for predictive justice, this framework illustrates how AI operates within the social fabric, manifesting patriarchal configurations of gender. In conclusion, we demonstrate how a reframing of gender performativity theory, when applied to AI ethics, allows us to take into account the social context within which these technologies will operate. This approach enables an expanded interpretation of the concept of fairness, one that reflects the complex dynamics of gender production. In the context of AI ethics, "fairness" refers to the capacity of an algorithm to generate results involving sensitive categories, such as gender, ethnicity, religion, sexual orientation, and disability, in a manner that does not engender discrimination and prejudice. The gender dimension thus needs to be reconsidered not as an individual attribute but as a performative process. Moreover, this approach enables the identification of pivotal issues that must be addressed during the development, testing, and evaluation phases of AI systems.
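As a purely illustrative sketch, not drawn from the paper, the following Python snippet shows what a group-level fairness gap and a naive attribute-flip counterfactual check might look like in practice; the toy classifier, data, and threshold are all hypothetical, and counterfactual fairness proper would require a full causal model rather than a simple attribute flip.

```python
# Illustrative sketch (not from the paper): a minimal group-fairness check.
# It computes the demographic parity difference of a toy classifier's
# positive-prediction rates across a binary sensitive attribute, plus a naive
# counterfactual test that flips the attribute and counts changed predictions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one binary sensitive attribute and one correlated feature.
n = 1000
sensitive = rng.integers(0, 2, size=n)
feature = rng.normal(loc=sensitive * 0.5, scale=1.0, size=n)

def predict(feature, sensitive):
    """A deliberately biased toy classifier whose score leaks the sensitive attribute."""
    score = feature + 0.3 * sensitive
    return (score > 0.5).astype(int)

pred = predict(feature, sensitive)

# Demographic parity difference: gap in positive-prediction rates between groups.
rate_g1 = pred[sensitive == 1].mean()
rate_g0 = pred[sensitive == 0].mean()
print(f"Demographic parity difference: {abs(rate_g1 - rate_g0):.3f}")

# Naive counterfactual check: flip the sensitive attribute, hold everything else
# fixed, and count how many individual predictions change. This only probes
# direct dependence on the attribute, not causal pathways through other features.
pred_flipped = predict(feature, 1 - sensitive)
print(f"Share of predictions that change under flip: {(pred != pred_flipped).mean():.3f}")
```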