Abstract

Recommender Systems (RS) allow users to share information about items they like or dislike and to obtain, in a timely fashion, recommendations based on predictions about unseen items (physical or information goods and/or services). In this process, users' preferences are treated as the target functions to be learned. We study Agent-based Recommender Systems (ARS) from the perspective of online learning in Multi-Agent Systems (MAS). This approach models the problem as a pool of independent cooperative predictor agents, one per user (the masters), where each agent (a learner) faces a sequence of trials, makes a prediction at every step, and eventually receives the correct value from its master. Each learner aims to discover the degree of similarity between the target function of its master and those of the other agents' masters (i.e., preference similarity), and uses this information in its own prediction task, with the goal of making as few mistakes as possible. We introduce a simple yet effective method that constructs a compound algorithm for each agent by combining memory-based individual prediction with online weighted-majority voting. We give a theoretical mistake bound for this algorithm that is closely related to the total loss of the best predictor agent in the pool. Finally, we report experiments whose results empirically support these ideas.
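
To make the online weighted-majority voting component concrete, the following Python sketch shows one way a learner agent could maintain a weight per peer predictor agent and combine their binary like/dislike votes over a sequence of trials. This is a minimal illustration in the style of the classical weighted-majority algorithm, not the paper's exact compound method (which additionally combines the vote with a memory-based individual prediction); the class, method, and parameter names (WeightedMajorityLearner, beta, update) are assumptions of this sketch.

```python
# Illustrative sketch only: online weighted-majority voting over a pool of
# peer predictor agents, as in the classical weighted-majority setting.
# All names and parameters here are assumptions, not the paper's notation.

class WeightedMajorityLearner:
    def __init__(self, n_experts: int, beta: float = 0.5):
        # One weight per peer predictor agent (expert); beta in (0, 1) is the
        # multiplicative penalty applied to experts that predict incorrectly.
        self.weights = [1.0] * n_experts
        self.beta = beta
        self.mistakes = 0

    def predict(self, expert_votes: list[int]) -> int:
        # expert_votes[i] in {0, 1}: expert i's like/dislike prediction.
        # Predict the label backed by the larger total weight.
        w1 = sum(w for w, v in zip(self.weights, expert_votes) if v == 1)
        w0 = sum(w for w, v in zip(self.weights, expert_votes) if v == 0)
        return 1 if w1 >= w0 else 0

    def update(self, expert_votes: list[int], true_label: int, prediction: int) -> None:
        # After the master reveals the correct value, down-weight every expert
        # that erred; experts whose masters have similar preferences keep high
        # weights, which is how preference similarity is tracked online.
        if prediction != true_label:
            self.mistakes += 1
        self.weights = [
            w * self.beta if v != true_label else w
            for w, v in zip(self.weights, expert_votes)
        ]


# Usage: one trial of the online protocol described above.
learner = WeightedMajorityLearner(n_experts=3)
votes = [1, 0, 1]                   # predictions gathered from peer agents
y_hat = learner.predict(votes)      # learner's combined prediction
learner.update(votes, true_label=1, prediction=y_hat)
```

In the standard analysis of this kind of scheme, the combiner's mistakes are bounded by a term logarithmic in the number of experts plus a term proportional to the mistakes of the best expert, which matches the flavor of the bound stated in the abstract, where the bound is tied to the total loss of the best predictor agent in the pool.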
