Abstract

The regulation of artificial intelligence (AI) has relied heavily on ex post, reactive tools. This approach has proven inadequate: numerous foreseeable problems arising from the commercial development and application of AI have harmed vulnerable persons and communities, with few (and sometimes no) opportunities for recourse, and worse problems are highly likely in the future. By requiring quality control measures before AI is deployed, an ex ante approach would often mitigate, and sometimes entirely prevent, injuries that AI causes or contributes to. Licensing is an important tool of ex ante regulation and should be applied in many high-risk domains of AI. Indeed, policymakers and even some leading AI developers and vendors are calling for licensure in this area. To substantiate such licensing proposals, this article specifies the optimal terms of licensure necessary to justify the use of AI. Given both documented and potential harms arising from high-risk AI systems, licensing agencies should require firms to demonstrate that their AI meets clear requirements for security, non-discrimination, accuracy, appropriateness, and correctability before it is deployed. Under this ex ante model of regulation, AI developers would bear the burden of proving that their technology is not discriminatory, not manipulative, not unfair, not inaccurate, and not illegitimate in its lawful bases and purposes. While the European Union's General Data Protection Regulation (GDPR) provides key benchmarks for ex post regulation, the proposed AI Act (AIA) offers a first regulatory attempt at an ex ante licensure regime in high-risk areas; it should be strengthened by expanding its scope and substantive content and by making the ex ante justification process more transparent.
