Abstract

Popular recommendation algorithms such as k-nearest neighbors and Slope One suit different situations. In this paper, we propose an approach termed the adaptive mechanism for recommendation algorithm ensemble (AMRE). AMRE consists of three parts: a set of agents, a reward function, and a roulette. First, each agent corresponds to a recommendation algorithm and carries a reward value that determines whether the agent is retained or replaced. Second, the reward function updates this reward value according to recommendation results: wrong recommendations bring punishments, while right ones bring rewards. Finally, when an agent's reward value falls below a given threshold, the roulette selects another agent to replace it. Experimental results on the MovieLens datasets show that AMRE outperforms 11 well-known algorithms and two classical ensemble methods in accuracy, recall, and F1-measure.
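
To make the mechanism concrete, below is a minimal Python sketch of the loop the abstract describes. The names (Agent, REWARD, PENALTY, THRESHOLD) and the exact update rule are assumptions for illustration; the paper's actual reward function and roulette weighting are not reproduced in the abstract.

```python
import random

# Hypothetical sketch of the AMRE loop; constants below are assumed,
# not taken from the paper.
REWARD = 1.0      # assumed bonus for a correct recommendation
PENALTY = -1.0    # assumed punishment for a wrong one
THRESHOLD = 0.0   # assumed cut-off below which an agent is replaced

class Agent:
    """Wraps one recommendation algorithm plus its running reward value."""
    def __init__(self, name, recommend):
        self.name = name
        self.recommend = recommend  # callable: user -> list of items
        self.reward = 0.0

def update_reward(agent, hit):
    """Reward function: right recommendations add, wrong ones subtract."""
    agent.reward += REWARD if hit else PENALTY

def roulette_select(agents):
    """Roulette-wheel selection: pick an agent with probability
    proportional to its (shifted, strictly positive) reward value."""
    low = min(a.reward for a in agents)
    weights = [a.reward - low + 1e-9 for a in agents]  # keep weights > 0
    return random.choices(agents, weights=weights, k=1)[0]

def step(active, agents, user, relevant_items):
    """One AMRE step: recommend, score the result, maybe swap agents."""
    items = active.recommend(user)
    hit = any(item in relevant_items for item in items)
    update_reward(active, hit)
    if active.reward < THRESHOLD:
        active = roulette_select(agents)  # replace the failing agent
    return active
```

In this sketch, a failing agent is swapped out stochastically rather than greedily, which matches the abstract's use of a roulette instead of always picking the current best agent.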
