Big data phenotyping in rare diseases: some ethical issues

Abstract
Highlights
These ethical issues apply to the use of machine learning algorithms (MLAs) in general,[7,8,9] and to their use in other healthcare contexts, and will become more relevant for those working in genetics research and clinical practice as computational phenotyping tools are increasingly deployed.
Methods of data procurement can induce bias in machine learning algorithms (MLAs) where the resulting training sets are too homogeneous and fail to reflect real-world diversity. This problem is especially pertinent in computational phenotyping for rare disease, because MLAs must be able to distinguish disease-related from non-disease-related phenotypes, and can only do so if exposed to a wide spread of phenotypic variation.
Combining differing datasets containing sensitive personal information may result in unexpected incidental findings (IFs), which are unrelated to the primary research or clinical question.
Summary
These ethical issues apply to the use of MLAs in general,[7,8,9] and to their use in other healthcare contexts, and will become more relevant for those working in genetics research and clinical practice as computational phenotyping tools are increasingly deployed. The first is the potential for MLAs to develop algorithmic bias, which may lead to social discrimination and result in inequitable access to healthcare.