Abstract

Much has been written about artificial intelligence, both from the perspective of its possibilities and opportunities and from the perspective of its risks and limitations. Here I make a simple point: evaluating the appropriateness of an algorithm requires understanding the domain in which it will operate. While data science enables one to work without deep expertise in a particular domain, it nevertheless requires a thorough familiarity with the question it is being asked to answer. Focusing on the answer rather than the question presents significant dangers. These are not necessarily physical hazards, but rather dangers to things like social norms, rule-of-law values, and the experience of equality. Deploying algorithms that do not avoid these dangers risks injustice in individual cases and generates longer-term threats to fundamental social and democratic values. Consider the context of criminal justice. Risk assessment tools are increasingly used, particularly in the United States, to make decisions about bail, parole, and sentencing.
