Abstract

Vast amounts of news are consumed through algorithmically curated media environments, such as search engines, social networking sites, or news aggregators. This lends algorithmic content curation considerable societal relevance and highlights the urgent need for independent and resilient academic research. Accordingly, a variety of methodological approaches have been applied, such as case studies, expert interviews, observations, or agent-based approaches. The paper discusses the applicability of these methodological efforts to journalism studies, showing that each of these approaches faces limitations, especially with regard to external validity, recruitment difficulties, and data reliability. Against this backdrop, agent-based testing represents one of the most promising approaches for overcoming many of these methodological limitations. Agent-based testing is a systematic and experimental approach that emulates online human behavior to test algorithmically curated media environments under various conditions. To achieve this properly, the paper suggests a set of settings and requirements to adequately address the technological, legal, and ethical challenges that come with the empirical investigation of algorithmic content curation. Ultimately, the paper presents both general considerations and practical instructions (using the “ScrapeBot”) for employing agent-based testing in journalism studies.
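To illustrate the general idea of agent-based testing, the following is a minimal sketch assuming Python with Selenium: a scripted browser agent emulates a user query, records the headlines it is shown, and writes them out for later comparison across conditions. The search URL, query terms, CSS selector, and output format are hypothetical illustrations and do not reflect ScrapeBot's actual implementation or API.

```python
"""Hypothetical sketch of a single agent-based test run (not ScrapeBot itself)."""
import csv
import time
from datetime import datetime, timezone

from selenium import webdriver
from selenium.webdriver.common.by import By

# Assumed example conditions; a real study would vary these systematically
# (e.g., across agent profiles, locations, or points in time).
QUERIES = ["climate change", "election polls"]
RESULT_SELECTOR = "h3"  # Hypothetical selector for result headlines.


def run_agent(query: str, driver: webdriver.Firefox) -> list[dict]:
    """Emulate one user query and log the ranked headlines shown to the agent."""
    driver.get("https://duckduckgo.com/?q=" + query.replace(" ", "+"))
    time.sleep(3)  # Crude wait for rendering; a real agent would use explicit waits.
    observations = []
    for rank, element in enumerate(
        driver.find_elements(By.CSS_SELECTOR, RESULT_SELECTOR), start=1
    ):
        observations.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "query": query,
            "rank": rank,
            "headline": element.text,
        })
    return observations


if __name__ == "__main__":
    driver = webdriver.Firefox()  # Requires geckodriver; headless mode is also possible.
    try:
        rows = [row for q in QUERIES for row in run_agent(q, driver)]
    finally:
        driver.quit()
    with open("agent_run.csv", "w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=["timestamp", "query", "rank", "headline"])
        writer.writeheader()
        writer.writerows(rows)
```

Repeating such runs with many agents and under systematically varied conditions is what makes the approach experimental; the ScrapeBot discussed in the paper provides practical instructions for organizing this kind of data collection.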
