Abstract

In late-modern societies, the double meaning of ‘monitoring’ is no coincidence: it can denote either a subject or an object, suggesting a phenomenological intersection between surveillance studies and technological deployments in mass media. Emerging applications of Artificial Intelligence (AI) tools enable an intensified and optimized collection of personal data, whether granted explicitly by individuals themselves or silently extracted through algorithmic learning. From an eventual possibility to an invisible probability, Big Data may be used for purposes ranging from devising purchase-preference profiles to inducing political bias in election periods and reinforcing bigotry against social minorities, especially transphobia. This paper addresses the use of AI and Big Data as social-surveillance tools for the establishment of more sophisticated strategies of social control. Before late modernity, disciplinary discursive power was the principal tool of social control in Western societies, wielded by institutions such as the Roman Catholic Church. Currently, AI technologies are deployed to perform a security-based regulation of society, potentially wielding gathered data as threats against social categories that deviate from moral-based norms. Incapable of broadly embracing all cultural and social developments throughout history, such norms enforce a social regulation and standardization that proves exclusionary for individuals whose identities do not conform to those moral standards. The issue of ethical AI regulation is therefore grounded in questioning to what extent Western cultural values and practices remain consistent when social and ethical policies are standardized and deployed globally to cultures that may hold distinctive perceptions and values.
Theoretical reflections on post-modern panoptic frameworks, such as synoptic and ban-optic devices, were carried out to assess the impact of emerging surveillance technologies as social-control strategies that reinforce the marginalization of excluded categories. Instances of recent technology-based discriminatory violence, such as misogyny, religious intolerance, racism, xenophobia, and transphobia, are presented and examined through the lens of current AI development. The effort toward a global, universal, and unilateral influence of Western cultural values on AI ethical regulation is counteracted with a reflection on decentralized, bottom-up approaches to culture, using applied ethnographic research to bring the potential of local culture into AI policy making. This work is expected to support future research on locally grounded ethical-AI approaches designed within a specific culture’s values, in order to mitigate and avoid social vulnerability and violence.
