Abstract

Existing machine learning-based recommendation methods rely mainly on supervised learning. However, because these methods are driven by historical behavior, they struggle to mine high-quality long-tail items, to achieve cold-start recommendations, and to respond to real-time changes in the environment. To this end, this paper proposes a Deep Reinforcement Learning-enabled Recommendation scheme based on Hierarchical attention and Sample-enhanced priority experience replay (HEDRL-Rec). First, we propose a hierarchical attention mechanism that extracts more hidden information, including the different contributions of single features and overall features (comprising combined features), to enhance the feature extraction ability of the Actor-Critic architecture. Then, considering the reusability of historical experiences and the differences in their contributions, we propose a sample-enhanced priority experience replay mechanism to alleviate the problems of sample imbalance, sparse data, and an excessive action space, thereby realizing personalized recommendations in real-time changing environments. Finally, we develop a deep reinforcement learning-enabled recommendation algorithm to address the non-convergence problem of the Critic. Extensive experiments demonstrate that the Click-Through Rate (CTR) of HEDRL-Rec is 10.55% higher than that of the state-of-the-art LIst-wise Recommendation framework based on Deep reinforcement learning (LIRD), and that HEDRL-Rec offers better stability and usability in recommendation scenarios, effectively alleviating the cold-start problem of systems lacking manually annotated data.
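As background for the replay mechanism named above, the sketch below illustrates standard prioritized experience replay (Schaul et al., 2016), the technique that a "sample-enhanced" variant would build on: transitions are sampled in proportion to their priority, and importance-sampling weights correct the resulting bias. This is a minimal illustration only; the class name, parameters, and the |TD error|-based priority rule are assumptions, not the HEDRL-Rec implementation.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal prioritized experience replay (illustrative sketch).

    Transitions are sampled with probability proportional to
    priority**alpha; importance-sampling weights (exponent beta)
    correct the bias introduced by non-uniform sampling.
    """

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha
        self.buffer = []          # stored (s, a, r, s_next, done) tuples
        self.priorities = []      # one priority per stored transition
        self.pos = 0              # next write position (ring buffer)

    def push(self, transition):
        # New transitions get the current maximum priority so each is
        # replayed at least once before being down-weighted.
        max_p = max(self.priorities, default=1.0)
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(max_p)
        else:
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        probs = np.asarray(self.priorities) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights, normalized for stability.
        weights = (len(self.buffer) * probs[idx]) ** (-beta)
        weights /= weights.max()
        batch = [self.buffer[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors, eps=1e-5):
        # A common priority rule is |TD error| + eps; a sample-enhanced
        # scheme would presumably modify this weighting.
        for i, err in zip(idx, td_errors):
            self.priorities[i] = abs(err) + eps
```

In an Actor-Critic loop, the Critic's TD errors from each sampled batch would be fed back through update_priorities, so informative experiences are replayed more often, which is how prioritized replay mitigates sample imbalance and data sparsity.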
