The aliens land. Imagine beneficent aliens who come in peace. One of their first acts is to provide us with an analytics/artificial intelligence program that predicts future thought and action. Call the program AI, for alien intelligence. We have no explanation or understanding of why AI predicts what it does. Even the best human computer science experts find large parts of the AI program completely unintelligible; it appears to involve programming and statistical techniques unknown to us. Its predictions are more accurate than ours but, like ours, still have a fairly high error rate. Humans, in the form of businesses, governments, and individuals, embrace the program, and many propose using AI systematically in the widest possible range of contexts as a basis for prediction and action. We contend that it would be extremely unwise to do so. To the extent that human-created predictive systems are similar to AI, it is also unwise to use them across a similarly wide range of contexts.

We assume AI has three features, all shared to some extent with human-created systems. First, it is unintelligible: we cannot figure out how or why it reaches the conclusions it does. Second, like human-created systems, AI analyzes extensive data to detect statistical regularities that hold for people in certain categories. To do so, it abstracts from the contextually rich narratives that render people’s individual arcs through the world intelligible, and this makes significant misclassification inevitable. Third, the aliens caution us that AI cannot detect its own misclassifications; there are no feedback mechanisms that detect and correct errors.

Given these features, it would be a serious mistake to use AI across the board as a basis for prediction and action. To begin with, AI will create winners and losers, and a very large number of them, since AI governs the widest possible range of contexts. Once negatively categorized, losers will face great difficulty escaping the categorizations that condemn them to that role. The high error rate means that many of the categorizations are wrong, and the lack of feedback ensures that AI will not correct its errors. AI’s unintelligibility means that there is no way to explain to losers why such treatment is not capricious and arbitrary. Such a predictive system is both profoundly unjust and a serious threat to social stability.

This raises three questions. To what extent are current human-created systems like AI? How can we ensure that our current systems do not have the objectionable features of AI? And to what extent, and in which cases, should we forgo the use of our systems?