Abstract

Artificial intelligence (AI), specifically machine learning (ML), is adept at identifying patterns and insights in the large volumes of data generated by routine outcome monitoring (ROM) and clinical feedback during treatment. When applied to patient feedback data, AI/ML models can assist clinicians in predicting treatment outcomes. Common reasons for clinician resistance to integrating data-driven decision-support tools in clinical practice include concerns about the reliability, relevance and usefulness of the technology, coupled with perceived conflicts between data-driven recommendations and clinical judgement. While AI/ML-based tools might be precise in guiding treatment decisions, it may not be possible to realise their potential at present due to implementation, acceptability and ethical concerns. In this article, we outline the concept of eXplainable AI (XAI), a potential solution to these concerns. XAI refers to a form of AI designed to articulate its purpose, rationale and decision-making process in a manner that is comprehensible to humans. The key to this approach is that end-users see a clear and understandable pathway from input data to recommendations. We use real Norse Feedback data to present an AI/ML example demonstrating one use case for XAI. Furthermore, we discuss key learning points that we will employ in future XAI implementations.
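To make the XAI pattern the abstract describes concrete, the sketch below trains a simple outcome-prediction model and then surfaces a per-patient "pathway from input data to recommendation" as additive feature contributions. This is a minimal illustration, not the authors' actual pipeline: the feature names, the synthetic data, the outcome definition and the choice of SHAP as the explanation technique are all assumptions made for the example; the article's real model is built on Norse Feedback data, which is not reproduced here.

```python
# Hypothetical sketch of an XAI workflow for ROM/clinical feedback data.
# Feature names and data are synthetic stand-ins, NOT Norse Feedback items.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for patient feedback item scores (0-4 Likert-like).
feature_names = ["sad_affect", "social_support", "alliance", "hopelessness"]
X = rng.integers(0, 5, size=(500, len(feature_names))).astype(float)

# Hypothetical binary outcome: risk of non-response to treatment.
y = (0.8 * X[:, 3] - 0.5 * X[:, 1] + rng.normal(0, 1, 500) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# SHAP values decompose one patient's prediction into per-item contributions,
# giving the clinician a comprehensible path from input data to recommendation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

patient = 0
print(f"Predicted risk: {model.predict_proba(X_test[[patient]])[0, 1]:.2f}")
for name, contrib in zip(feature_names, shap_values[patient]):
    print(f"{name:>15}: {contrib:+.3f}")
```

A clinician-facing tool would present these contributions alongside the prediction, so a recommendation can be checked against clinical judgement rather than accepted as an opaque score.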
