Abstract

Increased use of machine-learning algorithms across society has amplified the impact of online behavioral biases on historically disadvantaged populations, enabling discriminatory practices in hiring, loan approvals, child welfare determinations, access to public benefits, policing, and more. While companies have sought to self-regulate these problems, they have failed to enact concrete change. To address this problem, this chapter proposes an incentive-based rating system, modeled in part on the United States federal government’s Energy Star program, to better identify and mitigate online biases. Rating systems can prompt computer and data scientists, as well as the industries that license and disseminate algorithms, to interrogate algorithms’ sociological implications more rigorously and to incorporate non-technical actors and practices into their design and execution. Such incentives can also distinguish the developers and companies that seek transparency and feedback in pursuit of fairer, more inclusive, and more ethical artificial intelligence systems. In addition to promoting the use of rating systems, this chapter offers a framework for applying them to AI, including a set of questions for internal and external stakeholders, such as questions assessing the diversity of the development team, data quality, transparency, and feedback from user groups. The chapter concludes with a discussion of the role of public policies and potential regulatory enforcement that can make AI rating systems more effective, especially in use cases that adversely impact users.
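To make the framework concrete, below is a minimal sketch of how such an Energy Star-style rating might be computed from stakeholder assessments. The category names, weights, and star thresholds are illustrative assumptions, not the chapter’s published rubric; the sketch simply shows how scored answers to questions about team diversity, data quality, transparency, and user-group feedback could be aggregated into a single rating.

```python
"""Illustrative sketch only: a hypothetical Energy Star-style rating
for an AI system, aggregated from stakeholder questionnaire scores.
Category names, weights, and thresholds are assumptions for this
example, not the chapter's actual rubric."""

# Hypothetical assessment categories, drawn from the abstract's examples.
WEIGHTS = {
    "team_diversity": 0.25,
    "data_quality": 0.25,
    "transparency": 0.25,
    "user_feedback": 0.25,
}

# Hypothetical star thresholds over the weighted score (0.0 to 1.0).
STAR_THRESHOLDS = [(0.90, 5), (0.75, 4), (0.60, 3), (0.40, 2)]


def rate_system(scores: dict[str, float]) -> int:
    """Map per-category scores (each 0.0-1.0, e.g. averaged from
    stakeholder questionnaire answers) to a 1-5 star rating."""
    if set(scores) != set(WEIGHTS):
        raise ValueError(f"expected categories {sorted(WEIGHTS)}")
    total = sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
    for threshold, stars in STAR_THRESHOLDS:
        if total >= threshold:
            return stars
    return 1  # lowest rating when no threshold is met


if __name__ == "__main__":
    example = {
        "team_diversity": 0.80,
        "data_quality": 0.90,
        "transparency": 0.70,
        "user_feedback": 0.85,
    }
    # Weighted score 0.8125 -> 4 stars under these assumed thresholds.
    print(rate_system(example))
```

One design point the sketch surfaces: equal weights treat, say, transparency and data quality as equally important, whereas a real rating body would need to justify its weighting publicly, which is itself one of the transparency questions the chapter raises.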
