Abstract

Providing reasonable explanations for a recommender system's suggestions can help users trust the system. Because logic rule-based inference is concise, transparent, and aligned with human cognition, it can be adopted to improve the interpretability of recommendation models. Previous work that interprets user preferences with logic rules focuses solely on the construction of rules while neglecting the use of feature embeddings, which limits the model's ability to capture implicit relationships between features. In this paper, we aim to improve both the effectiveness and explainability of recommendation models by simultaneously representing logic rules and feature embeddings. We propose a novel model-intrinsic explainable recommendation method named Feature-Enhanced Neural Collaborative Reasoning (FENCR). The model automatically extracts representative logic rules from a massive space of possibilities in a data-driven way. In addition, we utilize feature interaction-based neural modules to represent logic operators on embeddings. Experiments on two large public datasets show that our model outperforms state-of-the-art neural logical recommendation models. Further case analyses demonstrate that FENCR can derive reasonable rules, indicating its high robustness and expandability.
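To make the idea of "neural modules representing logic operators on embeddings" concrete, here is a minimal PyTorch sketch of one such module. The abstract does not specify FENCR's architecture, so the class name `NeuralLogicOp`, the MLP structure, and the hidden size are illustrative assumptions in the spirit of neural collaborative reasoning, not the paper's actual design.

```python
import torch
import torch.nn as nn

class NeuralLogicOp(nn.Module):
    """Hypothetical learned binary logic operator (e.g., AND or OR) that maps
    a pair of feature embeddings to the embedding of the combined expression.
    Architecture is an assumption for illustration, not FENCR's design."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Concatenate the two operand embeddings and project back into the
        # embedding space, yielding a representation of (a OP b).
        return self.net(torch.cat([a, b], dim=-1))

# Usage sketch: combine two feature embeddings with a learned AND module.
dim = 16
logic_and = NeuralLogicOp(dim)
x, y = torch.randn(1, dim), torch.randn(1, dim)
z = logic_and(x, y)   # embedding of the expression (x AND y)
print(z.shape)        # torch.Size([1, 16])
```

Because the operator is a trainable module rather than a fixed Boolean function, it can exploit interactions between feature embeddings while the surrounding rule structure stays interpretable.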
