Abstract
Artificial intelligence (AI) is permeating one human endeavor after another. However, concern is growing about its use: the potential biases it contains, as well as misjudged applications of AI. This study continues recent investigations into the biases and issues that AI may introduce into human decision-making. We experimentally set up a decision-making classification task and observe human classifiers as they are guided in their decisions either by AI or by other humans. We find that over-reliance, or authoritative stigmatization, arises when the guidance comes from AI, whereas human guidance elicits discursive, explanatory decision-making. We conclude that although AI is seen as authoritative even in a low-stakes decision-making setting, it does not suppress choice; rather, combined with a lack of transparency, it suppresses visibility into how the decision maker constructs rationality. Based on the exploratory relationships that emerge between types of rationality, AI transparency, and authoritativeness, we outline avenues for future research.