Abstract

Personalized news recommendation systems, now widely adopted by online news sites, increasingly rely on sophisticated language models. Before the era of GPT-3, news recommendation systems progressed from rule-based and collaborative filtering approaches in the pre-2010s, through the integration of neural networks in the 2010s, to early large language models such as GPT-2 in 2019. The emergence of pre-trained language models like GPT-3 and T5 has ushered in a new recommendation paradigm. With its accessible interface, ChatGPT has become increasingly popular for text-based tasks. This study investigates ChatGPT's efficacy in news recommendation, focusing on personalized news recommendation, news provider fairness, and fake news detection. We acknowledge that ChatGPT's sensitivity to input phrasing is a limitation, and we examine this limitation from each of these angles. We also investigate whether particular prompt formats can alleviate these constraints or whether further research is needed. To move beyond static assessments, we build a webpage that tracks ChatGPT's performance on the examined tasks and prompts on a weekly basis. By leveraging large language models, this work seeks to improve news recommendation performance and to stimulate further research in this area.
