Abstract

Sequential recommendation, which aims to predict the next item in a user's sequence from their past behaviors, has become a popular research topic. Self-Attention (SA)-based models have shown state-of-the-art performance in this domain. These SA-based models adopt the vanilla self-attention mechanism, which takes each individual item as the minimum modeling unit and suffices to capture point-level relations, where several previously interacted items affect the target item individually. However, we argue that the vanilla self-attention mechanism in existing SA-based models neglects the collective influence of a group of items and thus cannot explicitly capture union-level relations, where several previous items affect the target item jointly. To address this limitation, we propose the Multi-Granularity Transformer (MGT), which leverages both point-level and union-level relations for sequential recommendation. The proposed MGT employs a new multi-granularity self-attention (MGSA) mechanism that captures both levels of relation simultaneously. Specifically, MGSA partitions the item latent space across attention heads and forces different heads to account for point-level and union-level relations, respectively. Moreover, to improve the feed-forward layer's ability to model local patterns, we incorporate a cross-token scheme into the point-wise feed-forward layer, enabling local information interaction between adjacent items. Extensive experiments on three widely used benchmark datasets demonstrate the effectiveness and rationality of the proposed MGT over several state-of-the-art sequential recommendation models.
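
The abstract does not specify the exact architecture, but the two ideas it names (head partitioning by granularity, and a cross-token feed-forward layer) can be made concrete. Below is a minimal, illustrative PyTorch sketch, not the authors' reference code: the `union_conv`-based construction of group ("union") representations, the `point_heads`/`union_kernel` parameters, and the shared key/value projections are hypothetical design choices for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiGranularitySelfAttention(nn.Module):
    """Partitions the heads by granularity: the first `point_heads` attend
    over individual items (point-level relations); the remaining heads attend
    over local groups of adjacent items (union-level relations), formed here
    with a causal depthwise 1-D convolution as one plausible instantiation."""

    def __init__(self, d_model=64, n_heads=4, point_heads=2, union_kernel=3):
        super().__init__()
        assert d_model % n_heads == 0 and 0 < point_heads < n_heads
        self.h, self.hp, self.dk = n_heads, point_heads, d_model // n_heads
        self.union_kernel = union_kernel
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        # Causal depthwise conv mixing each item with its predecessors,
        # producing "union" (group) representations for keys/values.
        self.union_conv = nn.Conv1d(d_model, d_model, union_kernel,
                                    groups=d_model)

    def _split(self, x):  # (B, L, D) -> (B, h, L, dk)
        B, L, _ = x.shape
        return x.view(B, L, self.h, self.dk).transpose(1, 2)

    def forward(self, x, causal_mask):  # x: (B, L, D); mask: bool (L, L)
        B, L, D = x.shape
        q = self._split(self.q_proj(x))
        # Point-level keys/values: the individual items themselves.
        k_pt, v_pt = self._split(self.k_proj(x)), self._split(self.v_proj(x))
        # Union-level keys/values: left-padded causal conv over adjacent items.
        xu = F.pad(x.transpose(1, 2), (self.union_kernel - 1, 0))
        xu = self.union_conv(xu).transpose(1, 2)  # (B, L, D)
        k_un, v_un = self._split(self.k_proj(xu)), self._split(self.v_proj(xu))
        # Route each head to its granularity, then attend as usual.
        k = torch.cat([k_pt[:, :self.hp], k_un[:, self.hp:]], dim=1)
        v = torch.cat([v_pt[:, :self.hp], v_un[:, self.hp:]], dim=1)
        att = (q @ k.transpose(-2, -1)) / self.dk ** 0.5
        att = att.masked_fill(causal_mask, float('-inf')).softmax(-1)
        out = (att @ v).transpose(1, 2).reshape(B, L, D)
        return self.out_proj(out)


class CrossTokenFeedForward(nn.Module):
    """Point-wise FFN augmented with a causal depthwise conv so that adjacent
    items exchange local information: one way to realise a cross-token
    scheme; the paper's exact formulation may differ."""

    def __init__(self, d_model=64, d_ff=256, kernel=3):
        super().__init__()
        self.kernel = kernel
        self.conv = nn.Conv1d(d_model, d_model, kernel, groups=d_model)
        self.fc1, self.fc2 = nn.Linear(d_model, d_ff), nn.Linear(d_ff, d_model)

    def forward(self, x):  # (B, L, D)
        x = F.pad(x.transpose(1, 2), (self.kernel - 1, 0))
        x = self.conv(x).transpose(1, 2)  # local cross-token mixing
        return self.fc2(F.relu(self.fc1(x)))


# Usage sketch: a causal mask blocks attention to future items.
B, L, D = 2, 10, 64
x = torch.randn(B, L, D)
mask = torch.triu(torch.ones(L, L, dtype=torch.bool), diagonal=1)
y = CrossTokenFeedForward()(MultiGranularitySelfAttention()(x, mask))
```

The depthwise convolution is only one candidate for building union representations; any local aggregator over adjacent items (mean pooling over a sliding window, a small MLP over concatenated neighbors) would fill the same role, since the essential idea is that union-level heads score queries against group-level keys rather than single-item keys.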
