Abstract

Because self-attention weights are unaffected by the distance between items in a sequence, self-attention models can describe user interests more accurately and completely, and they are therefore widely used in sequential recommendation. However, mainstream self-attention models compute the attention weights of user behavioral patterns from item-to-item similarity alone, so they fail to promptly reflect sudden drifts in a user's decisions. In this article, we introduce a bias strategy into the self-attention module, referred to as Learning Self-Attention Bias (LSAB), to learn fast-changing user behavioral patterns more accurately. LSAB adjusts the bias carried by the self-attention weights, leading to better prediction performance in sequential recommendation. In addition, we design four attention-weight bias types catering to diverse user behavior preferences. In experiments on benchmark datasets, every bias strategy in LSAB benefits state-of-the-art models and improves their performance by nearly 5% on average. The source code is publicly available at https://gitee.com/kyle-liao/lsab .
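To make the idea concrete, the sketch below shows one plausible way to add a learnable bias to self-attention logits in PyTorch. This is an illustrative assumption, not the paper's implementation: the abstract does not specify the four bias types, so the example uses a single learnable position-by-position bias matrix (here called attn_bias) added to the scaled dot-product logits before the softmax, which is the general mechanism LSAB describes.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BiasedSelfAttention(nn.Module):
    """Single-head causal self-attention with a learnable additive bias
    on the attention logits, letting the model deviate from pure
    item-to-item similarity. Hypothetical sketch; the paper's four
    bias types are not reproduced here."""

    def __init__(self, d_model: int, max_len: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # Learnable bias over (query position, key position) pairs;
        # initialized to zero so training starts from standard attention.
        self.attn_bias = nn.Parameter(torch.zeros(max_len, max_len))
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        seq_len = x.size(1)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        # Similarity-based logits, then the learned bias term.
        logits = torch.matmul(q, k.transpose(-2, -1)) * self.scale
        logits = logits + self.attn_bias[:seq_len, :seq_len]
        # Causal mask: each step attends only to earlier items.
        mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool,
                                     device=x.device), diagonal=1)
        logits = logits.masked_fill(mask, float("-inf"))
        weights = F.softmax(logits, dim=-1)
        return torch.matmul(weights, v)

Because the bias is added before the softmax, it can up- or down-weight particular positions regardless of item similarity, which is how a mechanism of this kind could react to abrupt interest drift.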
