Abstract

Artificial intelligence (AI) enables a medical device to optimize its performance through machine learning (ML), including the ability to learn from past experience. In healthcare, ML is currently applied in devices within controlled settings; for instance, some devices diagnose conditions like diabetic retinopathy without clinician input. Current risk-based regulatory approaches, however, are inadequate to facilitate the next technological step: allowing AI-based medical devices (AIMDs) to adapt actively to their data environments through ML. Recent and innovative regulatory changes, which treat AIMDs as software, or 'software as a medical device' (SaMD), and adopt a total device/product-specific lifecycle approach (rather than a point-in-time one), reflect a shift away from a strictly risk-based approach to one that is more collaborative and participatory in nature, and anticipatory in character. These features are better explained by a rights-based approach and are consistent with the human right to science (HRS). With reference to the recent explication of the normative content of the HRS by the United Nations Committee on Economic, Social and Cultural Rights, this paper explains why a rights-based approach centred on the HRS could be a more effective response to the regulatory challenges posed by AIMDs. The paper also considers how such a rights-based approach could be implemented in the form of a regulatory network that draws on a 'common fund of knowledges' to formulate anticipatory responses to adaptive AIMDs. In essence, the HRS provides both the mandate and the obligation for states to ensure that regulatory governance of high-connectivity AIMDs becomes increasingly collaborative and participatory in approach and pluralistic in substance.
