Abstract

One of the challenges posed by AI technologies is their "black box" nature, that is, the lack of explainability and interpretability of these systems. This chapter explores whether AI systems in healthcare generally, and in neurosurgery specifically, should be explainable, for what purposes, and whether current XAI ("explainable AI") approaches and techniques are able to achieve those purposes. The chapter concludes that XAI techniques, at least at present, are neither the only nor necessarily the best way to achieve trust in AI and to ensure patient autonomy or improved clinical decision-making, and that they are of limited significance in determining liability. Instead, we argue that greater transparency around AI systems and their training and validation is needed, as this information is more likely to achieve these goals.
