Abstract

Artificial intelligence (AI) is increasingly being developed and implemented in healthcare. This raises privacy concerns because many AI systems are privately owned and rely on data sharing arrangements involving large quantities of patient health information. We examined the Canadian legal and policy framework, focusing on regulation relevant to the potential for inappropriate use or disclosure of personal health information by private AI companies. This included analysis of federal and provincial legislation, the common law and research ethics policy. Our evaluation of these regulatory frameworks found that, together, they require private AI companies and their partners in healthcare implementation to meet high standards of privacy protection that prioritize patient autonomy, with limited exceptions. We found that healthcare AI systems must be consistent with the rules and foundational ethical norms enshrined in law and research ethics, even where this poses challenges for implementation. Data sharing arrangements must focus on tight integration with high levels of data security, strong oversight and retention of patient control over data.
