Artificial intelligence (AI)-based computational tools for deriving digital behavioral markers are increasingly able to automatically detect clinically relevant patterns in mood and behavior through algorithmic analysis of continuously and passively collected data. The integration of these technologies into clinical care is imminent, most notably in clinical psychology and psychiatry but also in other disciplines (e.g., cardiology, neurology, neurosurgery, pain management). Meanwhile, ethical guidelines for implementation are lacking, as are insights into what patients and caregivers want and need to know about these technologies to ensure acceptability and informed consent. In this work, we present qualitative findings from interviews with 40 adolescent patients and their caregivers examining ethical and practical considerations for translating these technologies into clinical care. We observed seven key domains (in order of salience) in stakeholders’ informational needs: (1) clinical utility and value; (2) evidence, explainability, evaluation, and contestation; (3) accuracy and trustworthiness; (4) data security, privacy, and misuse; (5) patient consent, control, and autonomy; (6) the physician-patient relationship; and (7) patient safety, well-being, and dignity. Drawing from these themes, we provide a checklist of questions, along with suggestions and key challenges, to help researchers and practitioners respond to what stakeholders want to know when integrating these technologies into clinical care and research. Our findings inform participatory approaches to co-designing treatment roadmaps for using these AI-based tools to enhance patient engagement, acceptability, and informed consent.