Abstract
Should the input data of artificial intelligence (AI) systems include factors such as race or sex when these factors may be indicative of morally significant facts? More importantly, is it wrong to rely on the output of AI tools whose input includes factors such as race or sex? And is it wrong to rely on the output of AI systems when it is correlated with factors such as race or sex (whether or not its input includes such factors)? The answers to these questions are controversial. In this paper, I argue for the following claims. First, since factors such as race or sex are not morally significant in themselves, including such factors in the input data, or relying on output that includes such factors or is correlated with them, is neither objectionable (for example, unfair) nor commendable in itself. Second, sometimes (but not always) there are derivative reasons against such actions due to the relationship between factors such as race or sex and facts that are morally significant (ultimately) in themselves. Finally, even if there are such derivative reasons, they are not necessarily decisive since there are sometimes also countervailing reasons. Accordingly, the moral status of the above actions is contingent.