Abstract

The temporal order of user behaviors, which reflects the user's near-future preferences, plays a key role in sequential recommendation systems. To capture such patterns from user behavior sequences, many recent works borrow ideas from language models and frame the task as a next-item prediction problem. This framing is reasonable, but it ignores the gap between user behavior data and text data. Generally speaking, user behaviors are more arbitrary than sentences in natural language. A behavior sequence usually carries multiple intentions, and the exact order of items matters little, whereas a sentence tends to express a single meaning, and reordering its words can change that meaning entirely. To address this issue, this study treats a user behavior sequence as a mixture of multiple subsequences. Specifically, we introduce a subsequence extraction module that assigns the items in a sequence to different subsequences according to their relationships. These subsequences are then fed into a downstream sequence model, from which we obtain several user representations. To train the whole system end to end, we design a new training strategy in which only the user representation closest to the target item is supervised. To verify the effectiveness of our method, we conduct extensive experiments on four public datasets; our method outperforms several baselines in most cases. Further experiments explore the properties of our model, and we visualize the results of the subsequence extraction.
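The abstract does not specify implementation details, but the pipeline it describes (assigning items to subsequences, encoding each subsequence into a user representation, and supervising only the representation near the target item) can be sketched in PyTorch. The sketch below rests on our own assumptions: the names SubseqRecommender and loss_nearest are hypothetical, and the softmax-based assignment, the shared GRU encoder, and the rule for choosing the "near" representation (largest assignment weight at the last observed position) are plausible fill-ins, not the authors' actual design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SubseqRecommender(nn.Module):
        """Hypothetical sketch: softly split a behavior sequence into K
        subsequences and encode each with a shared GRU."""

        def __init__(self, n_items, d=64, k=4):
            super().__init__()
            self.item_emb = nn.Embedding(n_items, d, padding_idx=0)
            self.assign = nn.Linear(d, k)        # item-to-subsequence assignment scores
            self.encoder = nn.GRU(d, d, batch_first=True)
            self.k = k

        def forward(self, seq):                   # seq: (B, L) item ids
            x = self.item_emb(seq)                # (B, L, d)
            a = F.softmax(self.assign(x), dim=-1) # (B, L, K) soft assignment weights
            reps = []
            for i in range(self.k):
                masked = x * a[..., i:i + 1]      # weight items by subsequence membership
                _, h = self.encoder(masked)       # h: (1, B, d), final hidden state
                reps.append(h.squeeze(0))
            reps = torch.stack(reps, dim=1)       # (B, K, d): one representation per subsequence
            return reps, a

    def loss_nearest(reps, a, target, item_emb):
        """Supervise only the user representation 'near' the target item:
        here, the subsequence with the largest assignment weight at the
        last observed position (one plausible reading of the strategy)."""
        idx = a[:, -1, :].argmax(dim=-1)                   # (B,) index of the nearest subsequence
        u = reps[torch.arange(reps.size(0)), idx]          # (B, d) selected representation
        logits = u @ item_emb.weight.T                     # score every candidate item
        return F.cross_entropy(logits, target)

A training step under these assumptions would compute reps, a = model(seq) and then loss_nearest(reps, a, target, model.item_emb); the other K-1 representations receive no gradient from the target, matching the selective-supervision idea in the abstract.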
