Abstract

The use of AI-enabled text-to-image generators, such as Midjourney and DALL-E, raises profound questions about the purpose, meaning, and value of images generally, and about the production, editing, and consumption of images in journalism specifically. This study explores how photo editors (or their equivalents) in seven countries perceive and/or use generative visual AI in their editorial operations, and it outlines the challenges and opportunities they see for the technology. It also identifies the extent to which these news organizations have policies governing the use of generative visual AI or, where they do not, the principles that participants feel should inform their development. Participants identified mis/disinformation as the primary challenge posed by AI-generated images; they also raised concerns about labor and copyright implications, the difficulty or impossibility of detecting AI-generated images, the potential for algorithmic bias, and the reputational risk of publishing AI-generated images. Conversely, participants saw potential in using AI for illustrations and brainstorming, while a minority viewed it as an opportunity to increase efficiency and cut costs.
