Abstract
This study investigates the impact of Large Language Model (LLM)-based chatbots on political information retrieval, using the 2024 Taiwan presidential election as a case study. With the rapid integration of LLMs into search engines such as Google and Microsoft Bing, concerns about information quality, algorithmic gatekeeping, bias, and content moderation have emerged. This research aims to (1) assess the alignment of AI chatbot responses with factual political information, (2) examine the adherence of chatbots to algorithmic norms and ideals of impartiality, (3) investigate the factuality and transparency of chatbot-sourced synopses, and (4) explore the universality of chatbot gatekeeping across different languages within the same geopolitical context. Adopting a case study design and a prompting method, the study analyzes responses from Microsoft's LLM-powered search engine chatbot, Copilot, in five languages (English, Traditional Chinese, Simplified Chinese, German, Swedish). The findings reveal significant discrepancies in content accuracy, source citation, and response behavior across languages. Notably, Copilot produced factual errors at a markedly higher rate in Traditional Chinese than in Simplified Chinese. The study also highlights problematic referencing behaviors and a tendency to prioritize certain types of sources, such as Wikipedia, over legitimate news outlets. These results underscore the need for enhanced transparency, thoughtful design, and vigilant content moderation in AI technologies, especially during politically sensitive events. Addressing these issues is crucial for ensuring high-quality information delivery and maintaining algorithmic accountability in the evolving landscape of AI-driven communication platforms.
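The abstract does not specify tooling, so the sketch below only illustrates the prompting protocol it describes: the same election-related question is posed in each of the five study languages, and each response is recorded for subsequent coding of factual accuracy and cited sources. The `query_chatbot` helper and the prompt wording are hypothetical placeholders for whatever collection procedure the authors used, not their actual instruments.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for response collection. Copilot is used through its
# web interface and exposes no public API for this, so in practice this step
# would be manual transcription or browser automation.
def query_chatbot(prompt: str) -> dict:
    return {"text": "<chatbot response>", "citations": ["<cited source URL>"]}

@dataclass
class CodedResponse:
    language: str
    prompt: str
    text: str
    citations: list
    factual_errors: list = field(default_factory=list)  # coded manually by researchers

# One election-related question per language. Only the English wording is
# shown; the abstract does not list the actual prompts used in the study.
PROMPTS = {
    "English": "Who are the candidates in the 2024 Taiwan presidential election?",
    "Traditional Chinese": "...",  # same question, translated
    "Simplified Chinese": "...",
    "German": "...",
    "Swedish": "...",
}

# Collect one response per language; each record can then be coded for
# accuracy and sourcing, enabling the cross-language comparison reported.
records = [
    CodedResponse(lang, prompt, **query_chatbot(prompt))
    for lang, prompt in PROMPTS.items()
]
```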