Abstract

Artificial Intelligence (AI) fundamentally changes the way we work by introducing new capabilities. Human tasks shift towards a supervisory role in which the human confirms or disconfirms a presented decision. In this study, we use signal detection theory to investigate and explain how specific information design influences human error detection performance. We conducted two online experiments in the context of AI-supported information extraction and measured participants' ability to validate the extracted information. In the first experiment, we investigated the effect of information provided prior to the error detection task. In the second experiment, we manipulated the design of the information presented during the task and investigated its effect. Both manipulations significantly impacted human error detection performance. Hence, our study provides important insights for developing AI-based decision support systems and contributes to the theoretical understanding of human-AI collaboration.
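To illustrate the signal detection theory framing the abstract refers to, the sketch below computes the two standard SDT measures from a participant's confirm/disconfirm counts: sensitivity (d′), which captures how well errors are distinguished from correct extractions, and the criterion (c), which captures response bias. The function name and the example counts are illustrative, not taken from the study.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute SDT sensitivity (d') and criterion (c) from raw counts.

    Here a 'hit' would be flagging a genuinely erroneous extraction,
    and a 'false alarm' would be flagging a correct one.
    """
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)            # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

# Hypothetical participant: 80 hits, 20 misses, 30 false alarms, 70 correct rejections
d_prime, criterion = sdt_measures(80, 20, 30, 70)
# d_prime ≈ 1.37, criterion ≈ -0.16 (slight bias towards flagging errors)
```

A higher d′ indicates better error detection regardless of how liberally the participant flags items, which is why SDT lets the study separate genuine detection ability from mere willingness to disconfirm.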

