Abstract

Artificial Intelligence (AI) is often viewed as the means by which the intelligence community will cope with the increasing amount of information available to it. Trust is a complex, dynamic phenomenon that drives adoption (or disuse) of technology. We conducted a naturalistic study with intelligence professionals (planners, collectors, analysts, etc.) to understand trust dynamics with AI systems. We found that on a long enough time scale, trust in AI self-repaired after incidents where trust was lost, usually based merely on the assumption that AI had improved since participants last interacted with it. Similarly, trust in AI continued to increase over time after incidents where trust was gained. We termed this general trend "buoyant trust in AI": trust in AI tends to increase over time, regardless of previous interactions with the system. Key findings are discussed, along with possible directions for future research.
