Abstract

There has been growing investment in artificial intelligence (AI) interventions to combat the opioid-driven overdose epidemic plaguing North America. Although the evidence for the use of technology and AI in medicine is mounting, there are a number of ethical, social, and political implications that need to be considered when designing AI interventions. In this commentary, we describe 2 key areas that will require ethical deliberation in order to ensure that AI is being applied ethically with socially vulnerable populations such as people who use drugs: (1) perpetuation of biases in data and (2) consent. We offer ways forward to guide and provide opportunities for interventionists to develop substance use-related AI technologies that account for the inherent biases embedded within conventional data systems. This includes a discussion of how other data generation techniques (eg, qualitative and community-based approaches) can be integrated within AI intervention development efforts to mitigate the limitations of relying on electronic health record data. Finally, we emphasize the need to involve people who use drugs as stakeholders in all phases of AI intervention development.
