Abstract

A session-based recommender system (SBRS) captures a user's dynamic behavior to recommend the next item in the current session. Given the user's past interactions in the ongoing session, the SBRS predicts the next item the user is likely to interact with. Sessions vary in duration from minutes to hours; many recommender systems prioritize longer sessions, yet most datasets contain more short sessions, and predicting the next item in short sessions is challenging due to limited context. Additionally, obtaining item embeddings is problematic because most SBRSs rely on one-hot encoding and therefore suffer from data sparsity. A long short-term memory (LSTM) network with an attention mechanism is proposed to address these issues, using the LSTM to capture sequential context and the attention mechanism to focus on the target items. To mitigate data sparsity, the Word2Vec embedding technique is used. The proposed model was tested on two publicly available datasets, 30Music and RSC19, and the results were compared with basic sequence models, i.e., RNN and LSTM. LSTM achieved a 41.95% hit rate on 30Music, while LSTM-Attention achieved 81.47% on RSC19. In summary, LSTM outperformed RNN and LSTM-Attention on 30Music, whereas LSTM with attention outperformed the other models on RSC19.
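To illustrate the idea of attention-pooling LSTM hidden states into a session representation and scoring candidate items against it, here is a minimal NumPy sketch. This is not the paper's implementation: the shapes, the random vectors standing in for learned parameters and Word2Vec item embeddings, and the function names are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(hidden_states, query):
    # hidden_states: (T, d) LSTM outputs for the T items in the session
    # query: (d,) attention query vector (learned in a real model)
    scores = hidden_states @ query            # (T,) relevance of each timestep
    weights = softmax(scores)                 # attention distribution over timesteps
    return weights @ hidden_states, weights   # (d,) session vector, (T,) weights

def next_item_scores(session_vec, item_embeddings):
    # rank all candidate items by similarity to the session representation;
    # the top-scoring item is the predicted next interaction
    return item_embeddings @ session_vec

# toy data standing in for LSTM outputs and Word2Vec item embeddings
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))    # 5 session interactions, 8-dim hidden states
q = rng.normal(size=8)         # attention query
E = rng.normal(size=(100, 8))  # 100 candidate item embeddings

ctx, w = attention_pool(H, q)
scores = next_item_scores(ctx, E)
top_item = int(np.argmax(scores))
```

The attention weights sum to 1, so the session vector is a convex combination of the hidden states, letting the model emphasize the interactions most predictive of the next item rather than only the final LSTM state.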

