Abstract
Artificial intelligence systems based on deep learning architectures are being investigated as decision-support systems for human decision-makers across a wide range of decision-making contexts. The literature on AI in medicine shows that patients and the public hold relatively strong preferences concerning desirable features of AI systems and their implementation, e.g. explainability, accuracy, and the role of the human decision-maker in the decision chain. The features that are preferred can be seen as 'protective' of the patient's interests. Such preferences may plausibly vary across decision-making contexts, but research on this question has so far been conducted almost exclusively in relation to medical AI. In this cross-sectional survey study we investigate the preferences of the adult Danish population for five specific protective features of AI systems and their implementation across eight different use cases in the public and commercial sectors, ranging from medical diagnostics to the issuance of parking tickets. We find that all five features are regarded as important across all eight contexts, but that they are deemed slightly less important when the implications of the decision are less significant to the respondents.