Abstract

Human beings are deeply social, and both evolutionary traits and cultural constructs encourage cooperation based on trust. Social robots insert themselves into human social settings, and they can be used for deceptive purposes. Robot deception is best understood by examining the effects of deception on the recipients of deceptive actions, and I argue that the long-term consequences of robot deception should receive more attention, as they have the potential to challenge human cultures of trust and degrade the foundations of human cooperation. In conclusion: regulation, ethical conduct by producers, and raised general awareness of the issues described in this article are all required to avoid the unfavourable consequences of a general degradation of trust.

Highlights

  • When social robots are introduced into human environments, they are embedded in a highly complex web of social structures and mechanisms

  • A robot cannot deceive, but it can be the tool of deception; the humans involved in producing and deploying social robots are responsible for the consequences these robots have for the culture of trust

  • Because I argue from an approach to agency that holds human beings responsible for the actions of the social robots we know today, the producers of social robots become the focus of my examination

Summary

Introduction

When social robots are introduced into human environments, they are embedded in a highly complex web of social structures and mechanisms. If social robots disrupt human social mechanisms when embedded in social settings, this illustrates how evolutionary traits and social responses can make us vulnerable, while culture and social norms may help us cope with the introduction of new entities into our environments. A robot cannot deceive, but it can be the tool of deception, and the humans involved in the production and deployment of social robots are responsible for the consequences social robots have for the culture of trust. Responsible social robotics requires that we do not create robots that exploit human sociability to the degree that human trust, and sociability, are reduced as a consequence. This could occur through both individual and group mechanisms, as (a) individuals learn and become less cooperative once they are deceived, and (b) robot deception may degrade trust by changing human culture and evolutionary pressures. The potential consequences of robot deception for human cooperation and culture are discussed.

  • Understanding deception
      • The concept of deception
      • Prosocial deception
  • Social robots and robot deception
      • Attribution of deception
      • Why designers deceive
      • Typologies of robot deception
      • Full and partial deception
      • The problem of anthropomorphism
  • Discussion
      • Cultural devolution
  • Conclusion