Abstract

The growing number of safety-critical, technologized workplaces is increasing the use of artificial intelligence (AI) to support complex human decision-making, raising the relevance of risk in the joint decision process. This online study examined participants' trust attitude and trust behavior during a visual estimation task supported by either a human or an AI decision support agent. Throughout the online study, risk levels were manipulated through different scenarios. Contrary to recent literature, no main effects on participants' trust attitude or trust behavior were found between support agent conditions or risk levels. However, participants using AI support exhibited increased trust behavior under higher risk, while participants with human support agents did not display behavioral differences. Reliance on self-confidence rather than trust, as well as an increased feeling of responsibility, are possible explanations. Furthermore, participants rated the human support agent as more responsible for possible negative outcomes of the joint task than the AI support agent; risk did not influence perceived responsibility. The study's findings concerning trust behavior underscore the importance of investigating the impact of risk in workplaces, shedding light on the under-researched effect of risk on trust attitude and behavior in AI-supported human decision-making.
