Abstract

In the classic movie O Brother, Where Art Thou?, set in the Depression-era South, an ominously unstoppable lawman and his posse track escaped prisoners with bloodhounds and rifles, finally catching up with those flawed and comical heroes at an old home place. About to be summarily executed, our heroes protest that they have recently been pardoned by the governor and that their imminent lynching would be illegal. Their ostensible leader, Ulysses Everett McGill, exclaims, “It ain’t the law!” Unfazed, Sheriff Cooley, cruel eyes hidden behind lifeless black shades, juts his implacable jaw skyward and scoffs, “The law? The law is a human institution.” With those immortal words, the lawman at once justifies and pardons the illegality of the looming nooses on the grounds that the law, being a human creation, is intrinsically fallible, and that its fallibility is inevitable.

Artificial intelligence is a human institution. It, machine learning, and all their algorithmic system variants (collectively, “artificial intelligence” or “AI”) are flawed and are subject to the discriminatory biases of the people and organizations that design, code, test, and use them, including the judiciary, the administrative state and other government institutions, and private companies. Much of the multitudinous Big Data upon which algorithms feed and operate is likewise flawed and reflects discriminatory biases, including the historical biases baked into and persisting in the data from redlining and the systematized racism of the Jim Crow era. AI deploys rapidly, at scale, and often worldwide. Consequently, discriminatory biases in AI have tremendous and rapid potential to harm the people who are the subjects of those systems, their families and communities, and the societies of which they are a part. Because algorithmic determinations may be used, in turn, as inputs for yet other algorithmic systems, the harms propagate. The harms may be irreparable. The scale and gravity of the impact of discriminatory bias in machine learning and other artificial intelligence and algorithmic systems are enormous and represent potentially existential and global threats to a more perfect union of humanity.

Artificial intelligence is a human institution, but that does not mean we should accept its misdesign and misuse as inevitable failings to be simply written off as the cost of “doing business.” Where people and organizations invent and seek to deploy such tools, they and public policy leaders must find ways to minimize the amplified risks of new and irreparable harms associated with those tools. How, then, can discriminatory biases in machine learning and other artificial intelligence systems be detected and eradicated? How can we ensure that, despite their status as a “human institution,” algorithmic systems are purged of their human-derived flaws and rendered just? These questions and others resonate, leading to the ultimate question of algorithmic justice: What is the Algorithmic State of our Nation, and how may it be ascertained and continually improved?
