Abstract

Many online retailers, such as Amazon, use automated product recommender systems to encourage customer loyalty and cross-sell products. Despite significant improvements in the predictive accuracy of contemporary recommender system algorithms, they remain prone to errors. Erroneous recommendations pose a particular threat to online retailers, because they diminish customers’ trust in, acceptance of, satisfaction with, and loyalty to a recommender system. Explanations of the reasoning that leads to recommendations might mitigate these negative effects. That is, a recommendation algorithm ideally would provide both accurate recommendations and explanations of the reasoning behind those recommendations. This article proposes a novel method to balance these concurrent objectives. Applying this method, which combines content-based and collaborative filtering, to two real-world data sets with more than 100 million product ratings reveals that it outperforms established recommender approaches in predictive accuracy (more than five percent better than the Netflix Prize-winning algorithm in terms of normalized root mean squared error) and in its ability to provide actionable explanations, which is also an ethical requirement of artificial intelligence systems.
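The abstract does not detail the proposed method, but the general idea of a hybrid recommender it builds on can be sketched as follows. This is an illustrative example only, not the paper's algorithm: it blends a collaborative-filtering prediction with a content-based prediction via a hypothetical weighting parameter `alpha`, and evaluates the result with the normalized root mean squared error (NRMSE) metric mentioned in the abstract.

```python
import numpy as np

def hybrid_predict(cf_score, cb_score, alpha=0.5):
    """Weighted blend of a collaborative-filtering prediction and a
    content-based prediction. `alpha` (a hypothetical parameter here)
    trades predictive accuracy against the explainable,
    content-based component."""
    return alpha * cf_score + (1.0 - alpha) * cb_score

def nrmse(predicted, actual, rating_min=1.0, rating_max=5.0):
    """Root mean squared error normalized by the rating range,
    assuming a 1-5 star rating scale."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    rmse = np.sqrt(np.mean((predicted - actual) ** 2))
    return rmse / (rating_max - rating_min)

# Toy data: component predictions and true ratings for three items.
cf = np.array([4.2, 3.1, 4.8])    # collaborative-filtering estimates
cb = np.array([4.0, 3.5, 4.5])    # content-based estimates
truth = np.array([4.0, 3.0, 5.0])

blended = hybrid_predict(cf, cb, alpha=0.7)
print(round(nrmse(blended, truth), 4))  # → 0.0563
```

Varying `alpha` makes the accuracy/explainability trade-off explicit: a higher weight on the content-based side yields predictions that are easier to justify to the customer ("recommended because it shares these attributes with items you rated highly"), at a possible cost in accuracy.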
