Abstract

The moral standing of robots and artificial intelligence (AI) systems has become a widely debated topic in normative research. This discussion, however, has primarily focused on systems developed for social functions, e.g., social robots. Given society's increasing interdependence with nonsocial machines, it has become imperative to examine how existing normative claims could be extended to specific disrupted sectors, such as the art industry. Inspired by the proposals of Gunkel and Coeckelbergh to ground machines' moral status in social relations, this research presents online experiments (∑N = 448) that test whether and how interacting with AI-generated art affects the perceived moral standing of its creator, i.e., the AI-generative system. Our results indicate that assessing an AI system's lack of mind could influence how people subsequently evaluate AI-generated art. We also find that the overvaluation of AI-generated images could negatively affect their creator's perceived agency. Our experiments, however, did not suggest that interacting with AI-generated art has any significant effect on the perceived moral standing of the machine. These findings reveal that social-relational approaches to AI rights could be intertwined with property-based theses of moral standing. We shed light on how empirical studies can contribute to the AI and robot rights debate by revealing the public's perception of this issue.

Highlights

  • As robots and artificial intelligence (AI) systems become widespread, scholars have questioned whether society should have any responsibility towards them

  • Whether participants interacted with AI-generated images before or after attributing moral agency and patiency to the system did not influence its perceived moral standing

  • Study participants ascribed the ability to create art to the AI system even though it was not described as an “artist,” nor were its outputs introduced as “art.” This specific artistic notion of agency was perceived as more significant to the AI-generative system than the more general conception of agency captured by the mind perception questionnaire



Introduction

As robots and artificial intelligence (AI) systems become widespread, scholars have questioned whether society should have any responsibility towards them. Scholars have expressed a plurality of views on this topic. Those who oppose the prospect denounce the idea by arguing that these entities are ontologically different from humans (Miller, 2015). Some scholars propose that robots and AI systems should matter morally if they develop consciousness or sentience (Torrance, 2008). Extensive literature has also questioned who should be responsible for the actions of AI and robotic systems. Some scholars argue that sentience and consciousness are necessary conditions for moral patiency (Bernstein, 1998). These views are rarely agreed upon in the literature discussing the moral status of non-humans (Gellers, 2020).


