Abstract

Artificial intelligence models can process massive amounts of data and surpass human experts in prediction. However, a lack of trust in algorithms sealed inside the "black box" remains one of the most challenging barriers to leveraging AI in human decision-making. Improving algorithmic transparency by presenting explanations is one of the most common remedies. Explainable artificial intelligence (XAI) has been a recent research focus, but most studies concentrate on developing explainable algorithms rather than on human factors. The objective of this study is therefore twofold: (1) to explore whether XAI can improve human performance and trust in AI in the competitive task of sales prediction, and (2) to reveal how the impact of XAI differs across individuals with different task-related capacities. Based on a quasi-experimental study, our results indicate that XAI can improve human decision accuracy in sales prediction for cross-border e-commerce. XAI does not improve self-reported trust in AI but does improve behavioral trust. We also found a placebo effect of explanations for individuals with relatively low task-related capacity.
