Should the input data of artificial intelligence (AI) systems include factors such as race or sex when these factors may be indicative of morally significant facts? More importantly, is it wrong to rely on the output of AI tools whose input includes factors such as race or sex? And is it wrong to rely on the output of AI systems when it is correlated with factors such as race or sex (whether or not its input includes such factors)? The answers to these questions are controversial. In this paper, I argue for the following claims. First, since factors such as race or sex are not morally significant in themselves, including such factors in the input data, or relying on output that includes such factors or is correlated with them, is neither objectionable (for example, unfair) nor commendable in itself. Second, sometimes (but not always) there are derivative reasons against such actions due to the relationship between factors such as race or sex and facts that are morally significant (ultimately) in themselves. Finally, even if there are such derivative reasons, they are not necessarily decisive since there are sometimes also countervailing reasons. Accordingly, the moral status of the above actions is contingent.