Abstract

This paper addresses a thus far neglected dimension in human-artificial intelligence (AI) augmentation: machine-induced reflection. By establishing a grounded, theory-informed model of machine-induced reflection, we contribute to the ongoing discussion in information systems (IS) research on AI and to research on reflection theories. In our multistage study, physicians used a machine learning (ML)-based clinical decision support system (CDSS) in an X-ray diagnosis task, allowing us to examine whether and how this interaction can stimulate reflective practice. By analyzing verbal protocols, performance metrics, and survey data, we developed an integrative theoretical foundation that explains how ML-based systems can stimulate reflective practice. Individuals engage in more critical or more superficial modes of reflection depending on whether they perceive conflict or agreement with the CDSS, which in turn leads to different depths of reflection. By uncovering the process of machine-induced reflection, we offer IS research a different perspective on how AI-based systems can help individuals become more reflective, and consequently more effective, professionals. This perspective stands in stark contrast to the traditional, efficiency-focused view of ML-based decision support systems and also enriches theories of human-AI augmentation.
