Abstract

The growing prevalence of algorithmic systems and artificial intelligence in news production has prompted concerns over journalists’ ability to understand and engage with them in ways that do not compromise journalistic norms and values. This ‘intelligibility’ issue is particularly acute for public service media (PSM) because such complex and opaque systems risk disrupting accountability, decision-making, and professional judgment. This article draws on document analysis and interviews with fourteen journalists to outline where AI is deployed in BBC news production and to analyse how journalists make sense of AI and algorithms. We find a disconnect between increasingly pervasive AI and the level of understanding amongst BBC journalists, who are using guesswork and imagination in place of accurate conceptions of these technologies. This could limit journalists’ ability to use AI systems effectively and responsibly, to question their outputs and role in news production, or to adapt and shape them – and could also hinder responsible reporting on how AI impacts society. We recommend that PSM develop strategies for fostering AI intelligibility and literacy on three levels: individual, organisational, and community. We also reframe the AI intelligibility problem in sociocultural rather than solely technical terms in order to better address normative considerations.
