Exploring Laypeople’s Moral Considerations Towards Social Robots


Similar Papers
  • Book Chapter
  • Cited by 3
  • 10.11647/obp.0366.03
3. Social Robots and Society
  • Sep 5, 2023
  • Sven Nyholm + 9 more

Advancements in artificial intelligence and (social) robotics raise pertinent questions as to how these technologies may help shape the society of the future. The main aim of the chapter is to consider the social and conceptual disruptions that might be associated with social robots, and humanoid social robots in particular. This chapter starts by comparing the concepts of robots and artificial intelligence, and briefly explores the origins of these expressions. It then explains what a social robot is, as well as what humanoid robots are. A key notion in this context is the idea of anthropomorphism: the human tendency to attribute human qualities, not only to our fellow human beings, but also to parts of nature and to technologies. This tendency to anthropomorphize technologies by responding to and interacting with them as if they have human qualities is one of the reasons why social robots – in particular social robots designed to look and behave like human beings – can be socially disruptive. As is explained in the chapter, while some ethics researchers think that anthropomorphization is a mistake that can lead to various forms of deception, others – including both ethics researchers and social roboticists – believe it can be useful or fitting to treat robots in anthropomorphizing ways. The chapter explores that disagreement by, among other things, considering recent philosophical debates about whether social robots can be moral patients; that is, whether it can make sense to treat them with moral consideration. Where one stands on this issue will depend on one’s views about whether social robots can have, imitate, or represent morally relevant properties, or on how people relate to social robots in their interactions with them. Lastly, the chapter urges that the ethics of social robots should explore intercultural perspectives, and highlights some recent research on ubuntu ethics and social robots.

  • Single Book
  • Cited by 3
  • 10.5771/9781538185049
Social Robots
  • Jan 1, 2023
  • Paula Sweeney

Social robots are an increasingly integral part of society, already appearing as customer service assistants, care-home helpers, teaching assistants and personal companions. This book argues that the wider inclusion of social robots in our society is having a revolutionary impact on some of our key intuitions regarding ethics, metaphysics and epistemology and, as such, will put pressure on many of our best theories. Social robots elicit an emotional and social response in humans that some have taken to be evidence that robots deserve moral consideration. Others have argued that, as robots are only machines, we should avoid designing robots that encourage emotional engagement. The fictional dualism model provides a new way for us to view social robots and a new route for our continued relationship with them. When we engage with a social robot, we create a fictional overlay that has wants, needs and desires. Our emotional attachment to social robots is a natural continuation of our relationship to fiction: a life-enhancing and important connection, but not one that prompts moral consideration for the fictional entity. In this book, Paula Sweeney shows how the fictional dualism model of social robots differs from other popular models. In addition to providing a distinctive and ethically appropriate framework for emotional engagement without moral consideration, the model provides conditions for trusting social robots and, uniquely, allows us to individuate social robots as distinct persons, even in contexts in which they share a collective mind.

  • Research Article
  • Cited by 54
  • 10.1016/j.compedu.2021.104317
Moral considerations on social robots in education: A multi-stakeholder perspective
  • Aug 30, 2021
  • Computers & Education
  • Matthijs Smakman + 2 more

Social robots are a promising new technology for primary education. However, they also introduce practical and moral challenges, and there is an increasing demand for guidelines for a responsible, safe introduction of social robots. In this study, we identified and compared the moral considerations associated with the introduction of social robots in primary education from the viewpoint of direct and indirect stakeholders by conducting focus group sessions (N = 118). In total, we identified and compared stakeholder considerations related to 17 moral values. Overall, each of the stakeholder groups considered social robots a potentially valuable tool for education. Many similarities and only a few conflicting views across the various stakeholder groups were found. Particularly among the teachers, parents, and policymakers, there were many similarities in the issues reported, and their considerations were often aligned. These insights into the moral considerations of the various stakeholders involved provide a solid base for developing guidelines for implementing social robots in education, as requested by scholars and society.

  • Conference Article
  • Cited by 10
  • 10.1109/ro-man47096.2020.9223582
The Impact of Social Robots in Education: Moral Considerations of Dutch Educational Policymakers
  • Aug 1, 2020
  • M Smakman + 2 more

Social robots are increasingly studied and applied in the educational domain. Although they hold great potential for education, they also bring new moral challenges. In this study, we explored the moral considerations related to social robots from the perspective of Dutch educational policymakers by first identifying opportunities and concerns and then mapping them onto (moral) values from the literature. To explore their moral considerations, we conducted focus group sessions with Dutch Educational Policymakers (N = 20). Considerations varied from the potential to lower the workload of teachers, to concerns related to the increased influence of commercial enterprises on the educational system. In total, the considerations of the policymakers related to 15 theoretical values. We identified the moral considerations of educational policymakers to gain a better understanding of a governmental attitude towards the use of social robots. This helps to create the necessary moral guidelines towards a responsible implementation of social robots in education.

  • Research Article
  • Cited by 8
  • 10.1016/j.chb.2023.107821
New dyads? The effect of social robots’ anthropomorphization on empathy towards human beings
  • May 31, 2023
  • Computers in Human Behavior
  • Federica Spaccatini + 2 more

Research on Human-Technology Interactions has revealed that, under certain conditions, people instinctively interact with social robots in ways comparable to Human–Human Interactions. Indeed, people apply social perception schemas and attribute a mind to social robots, especially when they present anthropomorphic characteristics. Furthermore, under certain conditions, anthropomorphic social robots are awarded moral consideration and participate in moral dyads. Thus, anthropomorphism facilitates social robots' integration into people's lives. However, what is still unknown is whether adopting social schemas with social robots, in turn, affects how individuals perceive and interact with other people. To fill this gap, we experimentally investigated whether the type of mind attributed to an anthropomorphic social robot, in turn, influences empathy towards a person in trouble. Participants (n = 269) interacted (vs. did not interact) through a chatbot with a highly (vs. lowly) anthropomorphic social robot, evaluated it on mind dimensions and, finally, expressed their empathy towards a person. Results demonstrated that anthropomorphism fosters the attribution of agency (anthropomorphic appearance and interaction through chatbot) and experience (anthropomorphic appearance only), which, in turn, significantly, but in opposite directions, affected empathy towards the social target. Implications and future research directions are outlined.

  • Research Article
  • Cited by 42
  • 10.1007/s13347-019-0341-y
Sympathy for Dolores: Moral Consideration for Robots Based on Virtue and Recognition
  • Feb 22, 2019
  • Philosophy & Technology
  • Massimiliano L Cappuccio + 2 more

This paper motivates the idea that social robots should be credited as moral patients, building on an argumentative approach that combines virtue ethics and social recognition theory. Our proposal answers the call for a nuanced ethical evaluation of human-robot interaction that does justice to both the robustness of the social responses solicited in humans by robots and the fact that robots are designed to be used as instruments. On the one hand, we acknowledge that the instrumental nature of robots and their unsophisticated social capabilities prevent any attribution of rights to robots, which are devoid of intrinsic moral dignity and personal status. On the other hand, we argue that another form of moral consideration—not based on rights attribution—can and must be granted to robots. The reason is that relationships with robots offer to the human agents important opportunities to cultivate both vices and virtues, like social interaction with other human beings. Our argument appeals to social recognition to explain why social robots, unlike other technological artifacts, are capable of establishing with their human users quasi-social relationships as pseudo-persons. This recognition dynamic justifies seeing robots as worthy of moral consideration from a virtue ethical standpoint as it predicts the pre-reflective formation of persistent affective dispositions and behavioral habits that are capable of corrupting the human user’s character. We conclude by drawing attention to a potential paradox drawn forth by our analysis and by examining the main conceptual conundrums that our approach has to face.

  • Research Article
  • Cited by 45
  • 10.1007/s00146-020-01002-1
On the moral status of social robots: considering the consciousness criterion
  • Jun 15, 2020
  • AI & SOCIETY
  • Kestutis Mosakas

While philosophers have been debating for decades on whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with human beings. In recent years, some approaches to moral consideration have been proposed that would include social robots as proper objects of moral concern, even though it seems unlikely that these machines are conscious beings. In the present paper, I argue against these approaches by advocating the “consciousness criterion,” which proposes phenomenal consciousness as a necessary condition for accrediting moral status. First, I explain why it is generally supposed that consciousness underlies the morally relevant properties (such as sentience) and then, I respond to some of the common objections against this view. Then, I examine three inclusive alternative approaches to moral consideration that could accommodate social robots and point out why they are ultimately implausible. Finally, I conclude that social robots should not be regarded as proper objects of moral concern unless and until they become capable of having conscious experience. While that does not entail that they should be excluded from our moral reasoning and decision-making altogether, it does suggest that humans do not owe direct moral duties to them.

  • Research Article
  • Cited by 2
  • 10.1007/s12369-023-01009-w
Why Social Robots Need Self-Preservation to Be Objects of Moral Consideration
  • Jun 13, 2023
  • International Journal of Social Robotics
  • Mohamed Hassan

While there is wide agreement today among researchers that robots lack the capacity to be conscious or sentient, two possible dangers are important to address today. Firstly, one could claim that a robot is sentient in the near future, or even today, in some way that we are not able to understand or verify. Secondly, if we decide to err on the side of caution and grant them legal protection anyway, we could be infringing on personal and intellectual freedom by restricting particular uses of robots or the research and development of said robots. This brings up the question that is central to this paper: where can we draw the line? Put another way, how can we know whether social robots are objects of moral consideration (OMCs, such as dogs, bees, or trees) or objects of human desire (OHDs, such as toasters or toys)? This paper presents the condition of self-preservation as a necessary and sufficient condition to draw the line between OMCs and OHDs.

  • Research Article
  • Cited by 15
  • 10.1007/s10676-021-09589-9
A fictional dualism model of social robots
  • Apr 8, 2021
  • Ethics and Information Technology
  • Paula Sweeney

In this paper I propose a Fictional Dualism model of social robots. The model helps us to understand the human emotional reaction to social robots and also acts as a guide for us in determining the significance of that emotional reaction, enabling us to better define the moral and legislative rights of social robots within our society. I propose a distinctive position that allows us to accept that robots are tools, that our emotional reaction to them can be important to their usefulness, and that this emotional reaction is not a direct indicator that robots deserve either moral consideration or rights. The positive framework of Fictional Dualism provides us with an understanding of what social robots are and with a plausible basis for our relationships with them as we bring them further into society.

  • Research Article
  • Cited by 277
  • 10.1007/s10676-010-9235-5
Robot rights? Towards a social-relational justification of moral consideration
  • Jun 27, 2010
  • Ethics and Information Technology
  • Mark Coeckelbergh

Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to be able to grant some degree of moral consideration to some intelligent social robots: it sketches a novel argument for moral consideration based on social relations. It is shown that to further develop this argument we need to revise our existing ontological and social-political frameworks. It is suggested that we need a social ecology, which may be developed by engaging with Western ecology and Eastern worldviews. Although this relational turn raises many difficult issues and requires more work, this paper provides a rough outline of an alternative approach to moral consideration that can assist us in shaping our relations to intelligent robots and, by extension, to all artificial and biological entities that appear to us as more than instruments for our human purposes.

  • Supplementary Content
  • Cited by 9
  • 10.3389/frobt.2021.791527
Empathizing and Sympathizing With Robots: Implications for Moral Standing
  • Jan 3, 2022
  • Frontiers in Robotics and AI
  • Oliver Santiago Quick

This paper discusses the ethical nature of empathetic and sympathetic engagement with social robots, ultimately arguing that an entity which is engaged with through empathy or sympathy is engaged with as an "experiencing Other" and is as such due at least "minimal" moral consideration. Additionally, it is argued that extant HRI research often fails to recognize the complexity of empathy and sympathy, such that the two concepts are frequently treated as synonymous. The arguments for these claims occur in two steps. First, it is argued that there are at least three understandings of empathy, such that particular care is needed when researching "empathy" in human-robot interactions. The phenomenological approach to empathy—perhaps the least utilized of the three discussed understandings—is the approach with the most direct implications for moral standing. Furthermore, because "empathy" and "sympathy" are often conflated, a novel account of sympathy which makes clear the difference between the two concepts is presented, and the importance of these distinctions is argued for. In the second step, the phenomenological insights presented before regarding the nature of empathy are applied to the problem of robot moral standing to argue that empathetic and sympathetic engagement with an entity constitutes an ethical engagement with it. The paper concludes by offering several potential research questions that result from the phenomenological analysis of empathy in human-robot interactions.

  • Research Article
  • Cited by 1
  • 10.25779/g8a3-f992
“How could you even ask that?”: Moral considerability, uncertainty and vulnerability in social robotics
  • Nov 19, 2020
  • Alexis M Elder


  • Book Chapter
  • 10.5771/9781538185049-27
Chapter 3: Social Robots and Moral Consideration
  • Jan 1, 2023
  • Paula Sweeney


  • Research Article
  • 10.1055/a-2452-8558
Ethical challenges of telepsychiatry
  • Nov 13, 2024
  • Fortschritte der Neurologie-Psychiatrie
  • Julien Willms + 4 more

Telepsychiatry is shifting the focus of psychiatry and psychotherapy from personal interaction and relationship building to communication with technological mediators in the form of telecommunication, virtual reality (VR), social robots or artificial intelligence (AI). This article discusses the opportunities and risks of new technologies in psychiatric treatment, taking into account the principles of medical ethics. Telepsychiatric treatments can promote self-determination in the home environment as opposed to institutionalisation, but carry the risk of uncontrolled data sharing. They harbour risks of potential harm such as social isolation, negative effects on the doctor-patient relationship and long-term changes in patient behaviour through, for example, VR. In terms of justice, demographic and psychopathological factors could result in unequal access to telepsychiatry, with questionable sustainability effects. The anonymity of telepsychiatry threatens alienation, while moral considerations cannot be replaced by AI for the time being. It is obligatory to carefully weigh up the benefits and risks of telepsychiatry, but especially in rural areas it could optimise care. Precisely because of ethical concerns, further intensive research is necessary to weigh up the risks and benefits. The development of telepsychiatric systems requires transdisciplinary co-operation. At a societal level, the significance of technologies as a supplement to or replacement for the doctor-patient relationship needs to be discussed.

  • Research Article
  • Cited by 3
  • 10.4018/ijt.2017070105
Animal Rights and Robot Ethics
  • Jul 1, 2017
  • International Journal of Technoethics
  • Thilo Hagendorff

This paper investigates challenges which anthropocentric and pathocentric ethics have to face when confronted with moral considerations about non-human animals, especially so-called disenhanced animals, and a new class of technological artifacts, namely social robots. Referring to the case of animal welfare, robot ethics emerges as a new discipline that has not yet reflected on the ideological biases that commonly underlie moral judgments toward animals and find expression in robot ethics, too. As a consequence, robot ethics perpetuates the “work of purification,” that is, the isolation and definition of a particular entity possessing a moral status. Whenever such an entity is defined, the definition excludes all those entities which could likewise possess a moral status but do not fit exactly to the pre-specified definition. The crucial question, then, is whether to seek an ethic of unconditional compassion that doesn't allow itself to be restricted by ideology and is therefore convenient for animal rights and robot ethics as well.
