The use of AI in weapons systems raises numerous ethical issues. To date, work on weaponized AI has been largely theoretical and normative, consisting of critical policy analyses and ethical considerations carried out by philosophers, legal scholars, and political scientists. However, adequately addressing the cultural and social dimensions of this technology requires insights and methods from empirical moral and cultural psychology. To that end, this position piece describes the motivations for, and sketches the nature of, a normative, cultural psychology of weaponized AI. The motivations for this project include the increasingly global, cross-cultural, and international nature of these technologies and the counter-intuitive nature of normative thoughts and behaviors. The project itself consists in developing standardized measures of AI ethical reasoning and intuitions, coupled with questions exploring the development of norms, administered and validated across different cultural groups and disciplinary contexts. The goal of this piece is not to provide a comprehensive framework for understanding the cultural and psychological dimensions of weaponized AI but rather to outline, in broad terms, the contours of an emerging research agenda.