Many newsrooms around the world are increasingly turning to artificial intelligence (AI) algorithms to generate journalistic content. Often, these machine-generated texts are distributed without being clearly identified as synthetic or hybrid. Since the launch of ChatGPT in late 2022, the tool’s extraordinary ability to mimic human language has been widely celebrated. Given that subjectivity is an integral part of human language, this study examines how texts generated with AI tools are imbued with subjective features that anthropomorphize their linguistic content. Our aim is to gain insight into how these texts express subjectivity in order to appear human-like, the limits of this expression, and its implications for communication. To this end, we analyse a corpus of AI-generated journalistic texts published in various media, together with texts created using AI tools such as ChatGPT and Gemini, to assess these tools’ capabilities. Ten criteria are used to characterize the expression of subjectivity in journalistic discourse, both on the text surface and in terms of situational appropriateness. The results show that AI tools can incorporate subjective markers on the text surface but have significant limitations regarding situational appropriateness, making it difficult for them to imitate certain features of journalistic writing. The paper also discusses the implications of asymmetrical audience interaction with machines that simulate human characteristics, and the varying degrees of opacity and transparency with which AI is used in newsrooms.