Abstract

Recommendation methods based on deep learning frameworks have proliferated in recent years, covering virtually all sub-topics in recommender systems. Among these topics, one-class collaborative filtering (OCCF) is a fundamental problem and has been studied most extensively. However, most existing deep learning-based OCCF methods focus on either defining new prediction rules by replacing the conventional shallow, linear inner product with a variety of neural architectures, or learning more expressive user and item factors with neural networks. They may therefore still suffer from inferior recommendation performance, because the underlying preference assumptions are typically defined on single items. In this paper, we address this limitation and better exploit the capacity of deep learning-based recommendation methods by adopting a setwise preference as the underlying assumption during model learning. Specifically, we propose a new setwise preference assumption for neural recommendation frameworks and devise a general solution named DeepSet, which enhances the learning abilities of neural collaborative filtering methods by activating the setwise preference at three different neural layers, namely 1) the feature input layer, 2) the feature output layer, and 3) the prediction layer. Extensive experiments on four commonly used datasets show that our solution can effectively boost the performance of existing deep learning-based methods without introducing any new model parameters.
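To make the setwise idea concrete, the following is a minimal sketch (not the paper's exact formulation) of how a setwise preference at the feature/embedding layer could differ from conventional itemwise scoring: item embeddings in a set are pooled into a single set representation before being scored against the user factor. All names, the mean-pooling choice, and the random factors are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_items, dim = 10, 4
item_emb = rng.normal(size=(n_items, dim))  # hypothetical item factors
user_emb = rng.normal(size=(dim,))          # hypothetical user factor

def pointwise_scores(user, items):
    # conventional prediction rule: one inner product per single item
    return items @ user

def setwise_score(user, item_set):
    # setwise preference at the feature layer: pool the set's item
    # embeddings into one set representation, then score it once
    pooled = item_set.mean(axis=0)
    return pooled @ user

pos_set = item_emb[[0, 1, 2]]  # items the user interacted with
neg_set = item_emb[[3, 4, 5]]  # sampled unobserved items

# a setwise assumption would ask the observed set to outscore the
# unobserved set, rather than comparing single items
margin = setwise_score(user_emb, pos_set) - setwise_score(user_emb, neg_set)
```

Note that with mean pooling, the set score equals the average of the itemwise scores, so this variant adds no new model parameters, consistent with the claim in the abstract.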

