Abstract

Recently, artificial intelligence (AI) systems have been widely adopted across contexts and professions. As these systems grow more complex, however, they have become black boxes that are difficult to interpret and explain. Spurred by wide media coverage of negative incidents involving AI, many scholars and practitioners have therefore called for AI systems to be transparent and explainable. In this study, we examine transparency in AI-augmented settings, such as workplaces, and present a novel analysis of the jobs and tasks that AI can augment. Using more than 1,000 job descriptions and 20,000 tasks from the O*NET database, we analyze the level of transparency required when AI augments these tasks. Our findings indicate that transparency requirements differ depending on each task's augmentation score and perceived risk category. They further suggest that it is important to be pragmatic about transparency, supporting the growing view that full transparency is impractical.
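To make the analysis concrete, the sketch below illustrates one way tasks like those in O*NET could be bucketed into transparency requirements from an augmentation score and a perceived risk category. This is a minimal illustration, not the authors' actual pipeline: the file name `tasks.csv`, the column names (`task`, `aug_score`, `risk`), the thresholds, and the mapping itself are all hypothetical assumptions.

```python
# Minimal sketch (not the paper's actual method): bucket O*NET-style tasks
# into transparency requirements by augmentation score and risk category.
# File name, column names, thresholds, and mapping are illustrative only.
import csv
from collections import Counter

def transparency_requirement(aug_score: float, risk: str) -> str:
    """Map a task's augmentation score and perceived risk category
    to a transparency level. The mapping is a hypothetical example."""
    if risk == "high":
        return "full-explanation"   # high-risk tasks demand the most transparency
    if aug_score >= 0.7:            # heavily augmented tasks
        return "detailed"
    if aug_score >= 0.3:            # partially augmented tasks
        return "summary"
    return "minimal"                # lightly augmented, low-risk tasks

def main() -> None:
    counts: Counter[str] = Counter()
    # Expects a CSV with columns: task, aug_score, risk (hypothetical schema).
    with open("tasks.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            level = transparency_requirement(float(row["aug_score"]), row["risk"])
            counts[level] += 1
    for level, n in counts.most_common():
        print(f"{level}: {n} tasks")

if __name__ == "__main__":
    main()
```

Under this kind of mapping, the distribution of required transparency levels would vary with how tasks score on augmentation and risk, which is the pattern the abstract reports.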
