Abstract

Several technological developments, such as self-service technologies and artificial intelligence (AI), are disrupting the retailing industry by changing consumption and purchase habits and the overall retail experience. Although AI represents extraordinary opportunities for businesses, companies must avoid the dangers and risks associated with the adoption of such systems. Integrating perspectives from emerging research on AI, morality of machines, and norm activation, we examine how individuals morally behave toward AI agents and self-service machines. Across three studies, we demonstrate that consumers’ moral concerns and behaviors differ when interacting with technologies versus humans. We show that moral intention (intention to report an error) is less likely to emerge for AI checkout and self-checkout machines compared with human checkout. In addition, moral intention decreases as people consider the machine less humanlike. We further document that the decline in morality is caused by less guilt displayed toward new technologies. The non-human nature of the interaction evokes a decreased feeling of guilt and ultimately reduces moral behavior. These findings offer insights into how technological developments influence consumer behaviors and provide guidance for businesses and retailers in understanding moral intentions related to the different types of interactions in a shopping environment.

Highlights

  • Several technological developments, such as the Internet, self-service technologies, and artificial intelligence (AI), have entirely changed the face of retailing, altering purchase and consumption patterns and interactions within the retail environment; as a result of their increased availability, retailers and service companies are using these technologies to predict consumers’ preferences and create a fast, smooth, and convenient experience for customers

  • We show that moral intention is less likely to emerge for self-checkout machines and AI checkout compared with human checkout

  • Participants’ perceived guilt for not reporting the billing error was greater in the human agent versus AI agent condition



Introduction

Several technological developments, such as the Internet, self-service technologies, and artificial intelligence (AI), have entirely changed the face of retailing, altering purchase and consumption patterns and interactions within the retail environment. These studies show that people apply different morality norms to humans and robots. Extending these findings, Shank et al. (2019) investigated the role of decision-making structures (individual decision-making vs. joint decision-making) involving moral violations by AI and human agents. Contradictory views about the moral treatment of machines and AI suggest that moral relations with those agents are not equivalent to human interactions (Bryson, 2010; Gunkel, 2018; Levy, 2009). Another important factor that can explain those differences lies along the two distinct dimensions of mind: agency and experience (Gray et al., 2017). We hypothesize the following: H3: Perceived guilt from not behaving morally mediates the effects predicted in H1 and H2.

Methodology
Results and Discussion
Moral intention
General Discussion
Limitations and Directions for Future Research
