Abstract

Understanding how people socially engage with robots is becoming increasingly important as these machines are deployed in social settings. We investigated the situational cooperation tendencies of 70 participants towards a robot using prisoner’s dilemma games, manipulating whether the incentives for cooperative decisions were high or low. We predicted that people would cooperate more often with the robot in high-incentive conditions. We also administered subjective measures to explore the relationships between people’s cooperative decisions and their social value orientation, attitudes towards robots, and anthropomorphism tendencies. Our results showed that incentive structure did not predict human cooperation overall, but it did influence cooperation in early rounds, where participants cooperated significantly more in high-incentive conditions. Exploratory analyses further revealed that participants played a tit-for-tat strategy against the robot (whose decisions were random), and only behaved prosocially toward the robot when they themselves had achieved high scores. These findings highlight how people make social decisions when their individual profit is at odds with the collective profit they share with a robot, and advance understanding of human–robot interaction in collaborative contexts.

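The abstract describes a repeated prisoner’s dilemma in which participants effectively played tit-for-tat against a robot whose choices were random, under high- or low-incentive payoff structures. As a rough illustration of that kind of setup (not the paper’s actual protocol), the sketch below simulates a tit-for-tat player against a random robot under two assumed payoff matrices; the payoff values, round count, and function names are hypothetical and chosen only to make the high/low incentive contrast concrete.

```python
import random

# Illustrative prisoner's dilemma payoffs (assumed values, not from the paper).
# Each entry maps (participant_move, robot_move) to (participant_payoff, robot_payoff).
PAYOFFS = {
    "high": {  # strong incentive: mutual cooperation pays well
        ("C", "C"): (8, 8), ("C", "D"): (0, 10),
        ("D", "C"): (10, 0), ("D", "D"): (2, 2),
    },
    "low": {   # weak incentive: mutual cooperation barely beats mutual defection
        ("C", "C"): (4, 4), ("C", "D"): (0, 10),
        ("D", "C"): (10, 0), ("D", "D"): (3, 3),
    },
}

def tit_for_tat(history):
    """Cooperate on the first round, then copy the robot's previous move."""
    return "C" if not history else history[-1][1]

def random_robot(_history):
    """The robot's decisions in the study were random."""
    return random.choice(["C", "D"])

def simulate(condition, rounds=20, seed=0):
    """Play `rounds` of the dilemma and return the participant's cooperation rate and score."""
    random.seed(seed)
    history, participant_score = [], 0
    for _ in range(rounds):
        p_move = tit_for_tat(history)
        r_move = random_robot(history)
        participant_score += PAYOFFS[condition][(p_move, r_move)][0]
        history.append((p_move, r_move))
    coop_rate = sum(move == "C" for move, _ in history) / rounds
    return coop_rate, participant_score

for condition in ("high", "low"):
    rate, score = simulate(condition)
    print(f"{condition}-incentive: cooperation rate {rate:.2f}, score {score}")
```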