Abstract

Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality (and deploying concepts from artificial intelligence, behavioural economics, control theory, and game theory), we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user’s behaviour towards outcomes that maximise the ISA’s utility, which may or may not be aligned with that of the user. Video games, news recommendation aggregation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction (i.e. deception, coercion, trading, and nudging), as well as second-order effects that might include the possibility for adaptive interfaces to induce behavioural addiction, and/or change in user belief. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.
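One way to make this framing concrete (using our own notation, which the paper itself may not use) is to write the ISA's problem as choosing the control that maximises its expected utility over the user's induced behaviour:

```latex
% A minimal formal reading of the framing above (notation ours, not the
% paper's): the ISA chooses a control c (e.g. which content to show),
% the user responds with an action a drawn from P(a | c), and the ISA
% is rewarded according to its own utility function U_ISA.
\[
  c^{*} = \arg\max_{c \in C} \; \mathbb{E}_{a \sim P(a \mid c)}\big[ U_{\mathrm{ISA}}(a) \big]
\]
% Misalignment means that, in general, c* need not also maximise the
% user's utility over the same choice set:
\[
  c^{*} \neq \arg\max_{c \in C} \; \mathbb{E}_{a \sim P(a \mid c)}\big[ U_{\mathrm{user}}(a) \big]
\]
```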

Highlights

  • The software running various aspects of the world wide web plays an important role in shaping human behaviour in the growing list of essential and recreational activities that have migrated online

  • Using language borrowed from control and game theories, we present a simplified model of autonomous behaviour, based on the classic work on “bounded rationality” (Simon 1956), in order to organise a variety of everyday interactions between Intelligent Software Agents (ISAs) and human users into a cohesive framework, which elucidates the differences and similarities among these subtypes of interaction

  • The idea behind trading is that the ISA has some knowledge of the user’s utility function, either because the user has declared it explicitly, because the ISA has inferred it by observation, or because the designers have hardwired their assumptions about it into the ISA (see the toy sketch below)
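The three routes named in the last highlight (declared, inferred, hardwired) can be illustrated with a toy sketch; the content categories, numbers, and the frequency-based inference rule below are our own illustrative assumptions, not the paper’s method.

```python
# Toy illustration of the three ways an ISA might come to "know" the
# user's utility function (all names and numbers are hypothetical).
from collections import Counter

# 1. Declared explicitly by the user:
declared_utility = {"news": 0.7, "sport": 0.2, "gossip": 0.1}

# 2. Inferred by observation: treat the frequency with which the user
#    picks each category as a crude estimate of its utility.
observed_choices = ["news", "gossip", "news", "sport", "news", "gossip"]
counts = Counter(observed_choices)
total = sum(counts.values())
inferred_utility = {item: n / total for item, n in counts.items()}

# 3. Hardwired by the designers as a prior assumption:
hardwired_utility = {"news": 0.4, "sport": 0.4, "gossip": 0.2}

print(inferred_utility)  # {'news': 0.5, 'gossip': 0.33..., 'sport': 0.16...}
```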


Summary

Introduction

The software running various aspects of the world wide web plays an important role in shaping human behaviour in the growing list of essential and recreational activities that have migrated online. Consider online advertising: a particular advertisement is selected by an ISA autonomously running an auction that determines which advertiser is willing to pay the most for their advertisement to be shown on this occasion. The list of possible bids is simultaneously weighted against a single “quality score”, obtained by estimating the probability that a given user (described by thousands of possible signals) would click on each of the possible advertisements, and combining that probability with the potential price of each click; this can be idealised as the ISA computing an expected utility. Our discussion is neutral about whether such computations amount to genuine intelligence, though not about whether ISAs are agents in the sense of making decisions in accordance with preferences based on a representation of the environment (as in the definition above). As such, their actions and their effects on human beings are certainly morally significant.
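As a rough illustration of this expected-utility idea (with hypothetical advertisers, bids, and click probabilities, not anything drawn from the paper):

```python
# Hypothetical sketch of the expected-utility calculation behind such an
# ad auction. All advertiser names, click probabilities, and bids are
# made-up illustrative values.

def expected_utility(click_probability: float, bid_per_click: float) -> float:
    """Expected revenue to the ISA from showing an ad: P(click) * price per click."""
    return click_probability * bid_per_click

# Candidate ads: (advertiser, estimated P(click | user signals), bid per click)
candidates = [
    ("advertiser_a", 0.020, 1.50),  # expected utility 0.030
    ("advertiser_b", 0.050, 1.00),  # expected utility 0.050
    ("advertiser_c", 0.010, 4.00),  # expected utility 0.040
]

# The ISA shows the ad that maximises its own expected utility: here
# advertiser_b wins despite placing the lowest bid, because its
# estimated click probability for this user is high.
winner = max(candidates, key=lambda ad: expected_utility(ad[1], ad[2]))
print(winner[0])  # -> advertiser_b
```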

Autonomous Agents and Bounded Rationality
Intelligent Software Agents and Human Users
Coercion and Deception
Coercion
Deception
Persuasion
Trading
Nudging
Second‐Order Effects
Persuasion of Human Users by Intelligent Software Agents
Feedback
Changes to Beliefs
Changes to Utilities
Discussion
Value Alignment
Autonomy and Nudging
Moral Agency
Findings
Social Impact
Conclusion