Abstract

Recommender systems are among today's most successful application areas of artificial intelligence. However, in the recommender systems research community, we have fallen prey to a McNamara fallacy to a worrying extent: in the majority of our research efforts, we rely almost exclusively on computational measures such as prediction accuracy, which are easier to obtain than the results of other evaluation methods. However, it remains unclear whether small improvements in such computational measures matter greatly and whether they lead us to better systems in practice. A paradigm shift in our research culture and goals is therefore needed. We can no longer focus exclusively on abstract computational measures but must direct our attention to research questions that are more relevant and have more impact in the real world. In this work, we review the various ways in which recommender systems may create value; how they, positively or negatively, impact consumers, businesses, and society; and how we can measure the resulting effects. Through our analyses, we identify a number of research gaps and propose ways of broadening and improving our methodology so that it leads us to more impactful research in our field.

Highlights

  • Recommendation as a matrix completion problem: the usual input for offline experiments in recommender systems research is a sparse user-item interaction matrix M, describing, e.g., how users rated items or whether they purchased a certain item

  • We review the various ways in which recommender systems may create value; how they, positively or negatively, impact consumers, businesses, and society; and how we can measure the resulting effects

  • The success of recommender systems in practice has led to tremendous academic interest in this area, and recommender systems — which may be considered one of the most visible applications of machine learning and artificial intelligence — have become a research field of their own


Summary

Recommendation as a Matrix Completion Problem

The usual input for offline experiments in recommender systems research is a sparse user-item interaction matrix M, describing, e.g., how users rated items or whether they purchased a certain item. Consider, for example, a matrix of the ratings given by four users on five items, in which some entries are missing; the recommendation task then amounts to predicting those missing entries.
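As an illustrative sketch of this formulation, the snippet below builds a small 4x5 user-item rating matrix with missing entries and completes it with a simple low-rank matrix factorization trained by gradient descent. The concrete rating values and all hyperparameters (rank `k`, learning rate, regularization) are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical 4x5 rating matrix (rows: users, columns: items).
# np.nan marks missing ratings; the values are illustrative only.
M = np.array([
    [5.0, 3.0, np.nan, 1.0, np.nan],
    [4.0, np.nan, np.nan, 1.0, 2.0],
    [np.nan, 1.0, np.nan, 5.0, 4.0],
    [1.0, np.nan, 5.0, 4.0, np.nan],
])

def factorize(M, k=2, lr=0.01, reg=0.02, epochs=2000, seed=0):
    """Approximate M as P @ Q.T by SGD on the observed entries only."""
    rng = np.random.default_rng(seed)
    n_users, n_items = M.shape
    P = rng.normal(scale=0.1, size=(n_users, k))  # latent user factors
    Q = rng.normal(scale=0.1, size=(n_items, k))  # latent item factors
    observed = [(u, i) for u in range(n_users) for i in range(n_items)
                if not np.isnan(M[u, i])]
    for _ in range(epochs):
        for u, i in observed:
            err = M[u, i] - P[u] @ Q[i]
            # Regularized gradient steps on both factor vectors.
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P @ Q.T  # dense matrix: predictions fill the missing cells

pred = factorize(M)
```

After training, `pred` is a fully dense matrix: the previously observed cells are approximately reconstructed, and the formerly missing cells now hold predicted ratings that could be used to rank items for each user. This is exactly the "matrix completion" view of recommendation that offline accuracy measures evaluate.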

Further sections of the paper:

  • The Multiple Stakeholders of Recommender Systems
  • Purpose and Value of Recommender Systems (consumer, organizational, and societal value)
  • Rethinking Our Research Approach
  • Choosing Evaluation Designs with Goal and Purpose in Mind
  • Improved Offline Evaluations
  • Summary
