Abstract

The recent sale of an artificial intelligence (AI)-generated portrait for $432,500 at Christie's art auction has raised questions about how credit and responsibility should be allocated among the individuals involved and how the anthropomorphic perception of the AI system contributed to the artwork's success. Here, we identify natural heterogeneity in the extent to which different people perceive AI as anthropomorphic. We find that differences in the perception of AI anthropomorphicity are associated with different allocations of responsibility to the AI system and credit to the different stakeholders involved in art production. We then show that perceptions of AI anthropomorphicity can be manipulated by changing the language used to talk about AI, as a tool versus an agent, with consequences for artists and AI practitioners. Our findings shed light on what is at stake when we anthropomorphize AI systems and offer an empirical lens to reason about how to allocate credit and responsibility to human stakeholders.

Introduction

On October 25, 2018, a portrait generated by a machine learning (ML) algorithm called a generative adversarial network (or GAN) (Goodfellow et al., 2014) sold at Christie's art auction for $432,500. Marketed by Christie's as "the first portrait generated by an algorithm to come up for auction," the painting, entitled Edmond de Belamy (see Figure 1), struck a chord about the nature of authorship and artificial intelligence (AI) (Cohn, 2018). Even though AI played a role in generating the artwork, Edmond de Belamy would never have been produced without the help of humans. It was the Parisian art collective Obvious who selected, printed, marketed, and sold the image; but the human involvement does not stop there. Obvious adapted open-source code written by the artist and programmer Robbie Barrat, who in turn built on the GAN techniques developed by ML researchers. Yet neither Barrat nor the ML researchers received any of the $432,500, which all went to Obvious.
