The role of social robots as advisors for decision-making is investigated. We examined how a robot advisor that reasoned logically and one that exhibited cognitive fallacies affected participants’ decision-making in different contexts. Participants made multiple decisions while receiving advice from both robots, chose which robot they agreed with, and, at the end of each scenario, ranked the options presented to them. After the interaction, participants were asked to assign jobs to the robots, e.g. jury or bartender. Based on the ‘like-me’ hypothesis and previous research on social mitigation of fallacious judgmental decisions, we compared participants’ agreement with the two robots in each scenario against random choice using t-tests, and analysed the dynamic nature of the interaction, e.g. whether participants changed their choices based on the robots’ verbal opinions, using Pearson correlations. Our results show that the robots affected participants’ responses regardless of their fallaciousness: participants changed their decisions toward the robot they agreed with more. The context, presented as two different scenarios, also affected which robot was preferred: an art auction scenario yielded significantly increased agreement with the fallacious robot, whereas a detective scenario did not. Finally, an exploratory analysis showed that personality traits, e.g. agreeableness and neuroticism, and attitudes towards robots influenced which jobs were assigned to which robot. Taken together, these results show that social robots’ effects on participants’ decision-making involve complex interactions between the context, the robot’s cognitive fallacies, and the participants’ attitudes and personalities, and should not be considered a single psychological construct.
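The comparison of agreement rates against random choice described above can be sketched as a one-sample t-test. The sketch below is illustrative only: the data, function name, and chance level of 0.5 are assumptions, not the study’s actual dataset or analysis code.

```python
import statistics
from math import sqrt

def t_vs_chance(agreements, chance=0.5):
    """One-sample t statistic comparing mean per-participant agreement
    with a given robot to chance level. `agreements` holds each
    participant's proportion of trials on which they agreed with that
    robot. Returns (t, degrees of freedom). Hypothetical helper name."""
    n = len(agreements)
    mean = statistics.fmean(agreements)
    sd = statistics.stdev(agreements)  # sample standard deviation
    return (mean - chance) / (sd / sqrt(n)), n - 1

# Hypothetical per-participant agreement proportions (not real data)
sample = [0.7, 0.65, 0.8, 0.6, 0.75, 0.55, 0.7, 0.6]
t, df = t_vs_chance(sample)  # t > 0 indicates above-chance agreement
```

A positive t with a sufficiently small p-value (looked up against the t distribution with `df` degrees of freedom) would indicate agreement significantly above chance, as reported for the art auction scenario.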