Abstract

Sequential recommendation mines sequential patterns in user behavior data to recommend items to users. Recent studies have mainly followed the language modeling paradigm, on the premise that the next item depends on the sequence of previous items. However, notable differences exist between user behavior data and textual data. One key difference is that a behavioral sequence can encompass multiple intentions, unlike a sentence, which typically expresses a single intention. Furthermore, behavioral sequences emerge freely from users, whereas sentences conform to grammatical rules. This study highlights the risk of treating a behavior sequence as a single unified sequence and the resulting potential for overfitting to the observed transitions. We mitigated this risk by using subsequence extraction for recommendation (SSE4Rec). This model employs a subsequence extraction module that disperses the items of a behavior sequence into distinct subsequences, grouping related items together. Each subsequence is then processed by an independent downstream sequence model, which discourages the memorization of inconsequential transitions. The training and inference strategies are integrated directly into the model. The proposed method was evaluated on four public datasets, where it outperformed publicly available alternatives or delivered competitive results. The properties of the model were also explored, and the output of the subsequence extraction module was visualized.
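
To make the described architecture concrete, the following is a minimal sketch of the idea of splitting a behavior sequence into intention-based subsequences that are encoded independently. It assumes PyTorch, and the specific choices (a soft intention-assignment head, one GRU per subsequence, the class and parameter names) are illustrative assumptions, not the paper's implementation of SSE4Rec.

```python
import torch
import torch.nn as nn


class SubsequenceExtractionSketch(nn.Module):
    """Illustrative sketch only (hypothetical names, not the paper's code).

    Softly assigns each item in a behavior sequence to one of K latent
    intentions, then encodes each resulting subsequence with its own GRU.
    """

    def __init__(self, num_items, embed_dim=64, num_subsequences=4):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, embed_dim, padding_idx=0)
        # Hypothetical extraction head: scores each item against K intention slots.
        self.intent_scorer = nn.Linear(embed_dim, num_subsequences)
        # One independent downstream encoder per subsequence.
        self.encoders = nn.ModuleList(
            nn.GRU(embed_dim, embed_dim, batch_first=True)
            for _ in range(num_subsequences)
        )

    def forward(self, item_ids):
        # item_ids: (batch, seq_len) integer item indices, 0 = padding
        x = self.item_emb(item_ids)                        # (B, L, D)
        assign = torch.softmax(self.intent_scorer(x), -1)  # soft assignment (B, L, K)
        reps = []
        for k, enc in enumerate(self.encoders):
            # Weight the sequence toward subsequence k via its assignment scores.
            xk = x * assign[..., k : k + 1]
            _, hk = enc(xk)                                # final hidden state (1, B, D)
            reps.append(hk.squeeze(0))
        # Per-subsequence user representations: (B, K, D).
        return torch.stack(reps, dim=1)


# Usage: score candidate items against each subsequence representation and,
# for every candidate, keep the best-matching intention.
model = SubsequenceExtractionSketch(num_items=1000)
seqs = torch.randint(1, 1000, (2, 20))                     # two users, 20 interactions each
user_reps = model(seqs)                                    # (2, K, D)
cand_emb = model.item_emb(torch.arange(1, 1000))           # (999, D)
scores = torch.einsum("bkd,nd->bkn", user_reps, cand_emb).max(dim=1).values
```

Because each GRU only sees its own weighted view of the sequence, transitions between unrelated items contribute little to any single encoder, which is one plausible way to realize the paper's goal of discouraging the memorization of inconsequential transitions.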
