Objective
Continuously learning or adaptive artificial intelligence (AI) applications for screening, diagnostic, and other clinical services are yet to be widely deployed. This is partly because existing device regulation mechanisms are not fit for purpose with respect to the adaptive features of AI. This study aims to identify the challenges in, and opportunities for, the regulation of adaptive features of AI.

Materials and Methods
We performed in-depth, qualitative, semi-structured interviews with a diverse group of 72 experts in high-income countries (Australia, Canada, New Zealand, the US, and the UK) who are involved in the development, acquisition, deployment, and regulation of healthcare AI systems.

Results
Our findings revealed perceived challenges in the regulation of adaptive features of machine learning (ML) systems. These challenges include the complexity of AI applications as products subject to regulation; the lack of accepted definitions of adaptive change; diverse approaches to defining significant adaptive change; and a lack of clarity about how adaptive change should be regulated. Our findings reflect potentially competing interests among different stakeholders and a diversity of approaches among regulatory bodies and legislators in different jurisdictions across the globe. In addition, our findings highlight the complex regulatory implications of adaptive AI, which differ from those of traditional medical products, drugs, or devices.

Conclusion
The perceived regulatory challenges raised by adaptive features of AI applications require high-level coordination within a complex regulatory ecosystem that consists of medical device regulators, professional accreditation agencies, professional medical organisations, and healthcare service providers. Regulatory approaches should complement existing safety protocols with new governance mechanisms that specifically take into account the variety of roles and responsibilities required to monitor, evaluate, and oversee adaptive changes.