Abstract

This article theorises, within the context of the law of England and Wales, the potential outcomes in negligence claims against clinicians and software development companies (SDCs) by patients injured due to AI system (AIS) use with human clinical supervision. Currently, a clinician will likely shoulder liability via a negligence claim for allowing defects in an AIS’s outputs to reach patients. We question whether this is ‘fair, just and reasonable’ to clinical users: we argue that a duty of care to patients ought to be recognised on the part of SDCs as well as clinicians. As an alternative to negligence claims, we propose ‘risk pooling’, which utilises insurance. Here, a fairer construct of shared responsibility for AIS use could be created between the clinician and the SDC, allowing a rapid mechanism of compensation to injured patients via insurance.

Highlights

  • This article theorises, within the context of the law of England and Wales, the potential outcomes in negligence claims against clinicians and software development companies (SDCs) by patients injured due to AI system (AIS) use with human clinical supervision

  • We will argue that this situation is unfair to the clinical user as the clinical decision-making space has been modified by the SDC via their AIS

  • If the system’s outputs consisted of recommendations that were illogical, the claimant may be able to show that the duty of care had been breached by the clinician who acted on that recommendation.[21]

Summary

Artificial intelligence and healthcare

Decision-making for patients in the clinical environment has historically been led by the clinical professions. An AIS may be designed to learn from its experiences and adjust its outputs without being explicitly programmed to do so (machine learning).[3] The process by which the system calculates its outputs could be sufficiently complex to render it effectively inscrutable to a non-expert user, a ‘black box’[4] in common parlance. These characteristics increase the risk that an AIS could produce a clinically inappropriate recommendation and that the defective logic involved goes undetected. There is currently a lack of clarity surrounding the sufficiency of legal mechanisms for liability when applied to malfunctioning AISs. In 2017, a House of Lords Select Committee recommended that the Law Commission investigate whether current legal principles were adequate to address liability issues when using AI and make recommendations in this area,[5] but a formal reference has not yet been made by the Government.

Key stakeholders
Clinical negligence
Duty of care
Breach of duty
Using volenti as a defence
Risk pooling
Advantages of risk pooling
Considerations for the future and conclusions

