Abstract
Language models constitute an effective framework for text retrieval tasks, and this framework has recently been extended to various collaborative filtering tasks. In particular, relevance-based language models can be used to generate highly accurate recommendations following a memory-based approach. On the other hand, the query likelihood model has proven to be a successful strategy for neighbourhood computation. Since relevance-based language models rely on user neighbourhoods to produce recommendations, we propose to compute those neighbourhoods with the query likelihood model instead of cosine similarity. The combination of both techniques results in a formal probabilistic recommender system that has not previously been applied to collaborative filtering. A thorough evaluation on three datasets shows that the query likelihood model provides better results than cosine similarity. To understand this improvement, we devise two properties that a good neighbourhood algorithm should satisfy. Our axiomatic analysis shows that the query likelihood model always enforces these constraints while cosine similarity does not.
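To make the neighbourhood-computation idea concrete, the sketch below illustrates how a target user's profile can be treated as a "query" and each candidate neighbour's profile as a "document" scored under the query likelihood model. This is an illustrative sketch, not the authors' implementation: the choice of Dirichlet smoothing, the parameter value mu=100, and all function and variable names (`query_likelihood_neighbours`, the toy ratings dictionary) are assumptions introduced here for illustration.

```python
# Illustrative sketch (not the paper's code): ranking candidate neighbours of a
# target user by query likelihood instead of cosine similarity.
# Dirichlet smoothing and mu=100 are assumptions; the abstract does not fix them.

import math
from collections import defaultdict


def query_likelihood_neighbours(ratings, target, k=10, mu=100.0):
    """Return the k users whose profiles best "explain" the target's items.

    ratings: dict user -> dict item -> rating (used here as term frequencies)
    """
    # Collection statistics p(i | C) used as the smoothing background model.
    item_totals = defaultdict(float)
    collection_total = 0.0
    for profile in ratings.values():
        for item, r in profile.items():
            item_totals[item] += r
            collection_total += r

    target_profile = ratings[target]
    scores = {}
    for v, v_profile in ratings.items():
        if v == target:
            continue
        v_len = sum(v_profile.values())
        log_score = 0.0
        for item, r_target in target_profile.items():
            p_background = item_totals[item] / collection_total
            # Dirichlet-smoothed estimate of p(item | neighbour v).
            p_item_given_v = (v_profile.get(item, 0.0) + mu * p_background) / (v_len + mu)
            # The target's rating plays the role of the query term frequency.
            log_score += r_target * math.log(p_item_given_v)
        scores[v] = log_score

    return sorted(scores, key=scores.get, reverse=True)[:k]


if __name__ == "__main__":
    # Hypothetical toy data for demonstration only.
    toy = {
        "u1": {"i1": 5, "i2": 3},
        "u2": {"i1": 4, "i3": 2},
        "u3": {"i2": 5, "i3": 4},
    }
    print(query_likelihood_neighbours(toy, "u1", k=2))
```

In a memory-based setting, the returned neighbourhood would then feed the relevance-based language model that produces the final recommendations; the smoothing method and its parameter would be tuned per dataset.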