Abstract

Artificial intelligence (AI) has become part of mainstream public discourse, extending beyond expert communities into debates about its risks, benefits, and need for regulation. In particular, since 2014, the news media have intensified their coverage of this emerging technology and its potential impact on most domains of society. Although many studies have analyzed traditional media coverage of AI, analyses of social media, especially video-sharing platforms, are rare. In addition, research from a risk communication perspective remains scarce, despite the widely recognized potential threats that many AI applications pose to society. This study aims to detect recurring patterns of societal threat/efficacy in YouTube videos, analyze their main sources, and compare the detected frames in terms of reach and response. Using a theoretical framework combining framing and risk communication, the study analyzed the societal threat/efficacy attributed to AI in easily accessible YouTube videos published in a year when public attention to AI temporarily peaked (2018). Four dominant AI frames were identified: the balanced frame, the high-efficacy frame, the high-threat frame, and the no-threat frame. The balanced and no-threat frames were the most prevalent, with predominantly positive and neutral AI narratives that, from a normative risk communication perspective, adequately address neither the risks nor the necessary societal response. The results revealed which specific risks and benefits of AI are addressed most frequently. Video views and user engagement with AI videos were also analyzed. Recommendations for effective AI risk communication and implications for risk governance were derived from the results.
