Abstract

Health data uses are on the rise. Data are increasingly used for a variety of operational, diagnostic, and technical purposes, as in the Internet of Health Things. Never has quality data been more necessary: large data stores now power the most advanced artificial intelligence applications, which may enable early diagnosis of chronic diseases and personalized medical treatment. These data, both personally identifiable and de-identified, have the potential to dramatically improve the quality, effectiveness, and safety of artificial intelligence. Existing privacy laws neither 1) effectively protect the privacy interests of individuals nor 2) provide the flexibility needed to support artificial intelligence applications. This paper identifies key challenges with existing privacy laws, including the ineffectiveness of de-identification and data minimization protocols in practice and problems with notice and consent as they apply to artificial intelligence applications, and then proposes an alternative privacy model. This model adopts a more restrictive approach to health privacy while introducing an interest-balancing approach to data processing and retention that benefits individuals and the general public.
