Abstract

Scholarship on uncertainty in artificial intelligence (AI) regulation has focused on theories, strategies, and practices for mitigating uncertainty. However, little is understood about how federal agencies communicate scientific uncertainties to stakeholders, including the public and regulated industries. This matters for three reasons: first, it highlights which aspects of an issue are quantifiable; second, it shows how agencies explain uncertainties about issues that are not easily quantified; and third, it reveals how knowledgeable agencies perceive the public audience to be in relation to the issue at hand and what they expect from such communication. By analyzing AI regulations across four categories of scientific uncertainty, this study found that uncertainties in the areas of ownership, safety, and transparency are hard to quantify, and agencies therefore use personalized examples to explain them. In addition, agencies seek public input to gather additional data and build consensus on issues with moral implications. These findings are consistent with the literature on tackling uncertainty and on regulatory decision-making, and they can help advance our understanding of current practices for communicating science effectively when explaining risks and uncertainties.
