Abstract

If artificial intelligence (AI) is to help solve individual, societal and global problems, humans should neither underestimate nor overestimate its trustworthiness. Situated between these two extremes is an ideal ‘Goldilocks’ zone of credibility. But what will keep trust in this zone? We hypothesise that this role ultimately falls to the social cognition mechanisms which adaptively regulate conformity between humans. This novel hypothesis predicts that human-like functional biases in conformity should occur during interactions with AI. We examined multiple tests of this prediction using a collaborative remembering paradigm, where participants viewed household scenes for 30 s vs. 2 min, then saw two-alternative forced-choice decisions about scene content originating from either AI or human sources. We manipulated the credibility of different sources (Experiment 1) and, from a single source, the estimated likelihood (Experiment 2) and objective accuracy (Experiment 3) of specific decisions. As predicted, each manipulation produced functional biases for AI sources mirroring those found for human sources. Participants conformed more to higher-credibility sources, and to higher-likelihood or more objectively accurate decisions, becoming increasingly sensitive to source accuracy when their own capability was reduced. These findings support the hypothesised role of social cognition in regulating AI’s influence, raising important implications and new directions for research on human–AI interaction.

Highlights

  • If artificial intelligence (AI) is to help solve individual, societal and global problems, humans should neither underestimate nor overestimate its trustworthiness

  • The identical pattern of influence suggests that similar underlying mechanisms regulate conformity to image-classification decisions from AI and human sources, at least insofar as we could detect here

  • Participants’ subjective ratings of their memory reliability relative to their partner (Fig. 2b) showed a robust drop for images viewed for 30 s vs. 2 min (F(1, 98) = 30.87, p < 0.00001, ηp² = 0.240), an effect that appears marginally more pronounced for AI vs. human partners (F(1, 98) = 3.05, p = 0.084, ηp² = 0.030); the reported effect sizes can be cross-checked as shown in the sketch below
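
As a cross-check on the statistics in the final highlight, the following minimal sketch (in Python; our illustration, not the authors' analysis code) recovers the reported partial eta squared values from the F ratios and their degrees of freedom, assuming the standard identity ηp² = F·df_effect / (F·df_effect + df_error).

```python
def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    """Recover partial eta squared from an F ratio and its degrees of freedom,
    using the standard identity eta_p^2 = F*df_effect / (F*df_effect + df_error)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Retention-interval effect reported as F(1, 98) = 30.87, eta_p^2 = 0.240
print(f"{partial_eta_squared(30.87, 1, 98):.3f}")  # prints 0.240

# AI vs. human source effect reported as F(1, 98) = 3.05, eta_p^2 = 0.030
print(f"{partial_eta_squared(3.05, 1, 98):.3f}")   # prints 0.030
```

Running the sketch reproduces the reported values of 0.240 and 0.030, confirming the effect sizes are internally consistent with the F statistics.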

Discussion

Objectively accurate decisions accompanied by likelihoods further improved the regulation of informational influence beyond that observed in Experiment 2. To the extent that such features engage psychological mechanisms that mediate normative influence solely to enhance the user’s experience, they should presumably function without impact on the regulation of an AI’s informational influence. In our view, this is not a safe assumption; its validity and limits should be empirically established, and the current framework offers a way to do this. Our experimental evidence strongly suggests that similar functional mechanisms regulate the influence of AI agents providing image recognition and classification decisions to support memory-based judgements, and we believe that this approach can and should be used more widely to investigate when an AI is operating within a Goldilocks zone and when it is not. It seems inevitable, perhaps, that AI will have an increasing societal-level impact, and for that reason it is, in our view, important to pay attention to whether or not semi- and partially autonomous AI operate within a Goldilocks zone of credibility. A notable example is the impact of AI on progress towards sustainable development goals [1,69] that entail the minimisation or elimination of such gaps.
