Abstract

Although some authors claim that deception requires intention, we argue that there can be deception in social robotics, whether or not it is intended. By focusing on the deceived rather than the deceiver, we propose that false beliefs can be created in the absence of intention. Supporting evidence is found in both human and animal examples. Instead of assuming that deception is wrong only when carried out to benefit the deceiver, we propose that deception in social robotics is wrong when it leads to harmful impacts on individuals and society. The appearance and behaviour of a robot can lead to an overestimation of its functionality, or to an illusion of sentience or cognition, that can promote misplaced trust and inappropriate uses such as care and companionship of the vulnerable. We consider the allocation of responsibility for harmful deception. Finally, we suggest that harmful impacts could be prevented by legislation, and by the development of an assessment framework for sensitive robot applications.

Highlights

  • According to a number of authors (e.g. Matthias 2015; Sparrow and Sparrow 2006; Sparrow 2002; Wallach and Allen 2009; Sharkey and Sharkey 2011), the development and creation of social robots often involves deception

  • Some of the disagreement about whether or not deception is involved in social robotics stems from differing views on whether deception necessarily involves intention on the part of the deceiver

  • We argue for the need to recognise that there is deception in social robotics

Introduction

According to a number of authors (e.g. Matthias 2015; Sparrow and Sparrow 2006; Sparrow 2002; Wallach and Allen 2009; Sharkey and Sharkey 2011), the development and creation of social robots often involves deception. The Engineering and Physical Sciences Research Council's (EPSRC) 'principles of robotics' (Boden et al. 2017) state that 'Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be made transparent'. Sorell and Draper (2017) are more sceptical about deception in robotics. They set a high bar, assuming deception is involved only if the design of a robot misleads people. An important first step towards preventing the harmful consequences of deception in social robotics is to recognise that it can and does occur. Once this is recognised, the next step is to consider what harmful effects it could have. We identify potential examples of such effects, and follow this with a discussion of who could be held responsible, and whether such consequences could be prevented.

Deception without intention

When is deception wrong?

Summary and Conclusions
