Abstract
The use of AI-enabled text-to-image generators, such as Midjourney and DALL-E, raises profound questions about the purpose, meaning, and value of images generally, and about the production, editing, and consumption of images in journalism specifically. This study explores how photo editors (or their equivalents) in seven countries perceive and/or use generative visual AI in their editorial operations, and outlines the challenges and opportunities they see for the technology. It also identifies the extent to which these news organizations have policies governing how generative visual AI is used or, where such policies are absent, the principles that participants feel should inform their development. Participants identified mis/disinformation as the primary challenge of AI-generated images, also raising concerns about labor and copyright implications, the difficulty or impossibility of detecting AI-generated images, algorithmic bias, and the reputational risk of using AI-generated images. Conversely, participants saw potential for using AI for illustrations and brainstorming, while a minority saw it as an opportunity to increase efficiencies and cut costs.