Abstract

AI systems have come to occupy a unique place in our daily lives, supporting decision-making in both critical and non-critical scenarios. Although widely adopted across different sectors, these systems have yet to reach their full potential in critical domains such as healthcare enabled by the Internet of Things (IoT). One important factor hindering adoption is the implication for accountability of decisions and outcomes affected by an AI system, where accountability is understood as a means of ensuring a system's performance; however, the term is often interpreted differently across sectors. Since the EU's GDPR and the US Congress have emphasised the importance of enabling accountability in AI systems, there is a strong demand to understand and conceptualise the term. It is crucial to address the various aspects integral to accountability and to understand how they affect the adoption of AI systems. In this paper, we conceptualise the factors affecting accountability and how they contribute to a trustworthy healthcare AI system. Focusing on healthcare IoT systems, our conceptual mapping helps readers understand which system aspects these factors contribute to and how they affect system trustworthiness. Besides illustrating accountability in detail, we also share our vision of causal interpretability as a means to enhance accountability in healthcare AI systems. The insights of this paper contribute to academic research on accountability and benefit AI developers and practitioners in the healthcare sector.
