Abstract

It is well recognised that cognitive irrationalities can be exploited to influence behaviour. ‘Hypernudging’ was coined by Karen Yeung to describe a powerful version of this phenomenon seen in digital systems that use large quantities of user data and machine learning to guide decision-making in highly personalised ways. Authors have worried about the societal impacts of the use of these capabilities at scale in commercial systems, but have only begun to articulate them concretely. In this paper I elucidate one concern of this sort by focusing specifically on the employment of these techniques within social media and considering how it threatens our autonomy in forming moral judgments. By moral judgments I mean our judgments of someone’s actions or character as good versus bad. A threat to our autonomy in forming these judgments is of real concern because moral judgments, and their associated beliefs, provide a critical backdrop for what is deemed acceptable in society, both individually and collectively, and therefore for what futures are possible and probable.

In the first two sections I introduce a psychological model that describes how humans reach moral judgments and the conditions under which that process can and cannot be considered autonomous. In the third section I describe how hypernudging within a social media context creates the relevant problematic conditions, so as to constitute a threat to our autonomy in forming moral judgments. In the fourth section I explore some practical measures that could be taken to protect moral autonomy. I conclude with indicative evidence that this threat is not experienced uniformly across all societies, pointing to interesting areas for future research.
