Abstract

In multi-agent systems, agents cooperate by asking other agents' opinions as part of their own decision-making process. The goal of a recommendation system is to provide advice either on request or proactively. In this paper, we assume that the value of advice from another agent is determined by the trust in that agent. We present a trust-based model of collaboration and decision making in a multi-agent system. However, we assume that agents may be dishonest (such an agent is called an intruder), so the advice they give may be false. The goal of this paper is the detection of intruders, thereby minimising the damage that they can cause. We also present the underlying relational database model and use it to build a prototype. Our tests show that the more influential an agent is in the multi-agent system, as measured by its CreditRank, the faster that agent will be unmasked if it is an intruder.
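The abstract does not define how CreditRank is computed. As an illustration only, the sketch below shows one plausible PageRank-style influence score over a directed trust graph, where an edge from agent a to agent b with a weight means that a trusts b's advice; the function name, graph representation, and parameters are all hypothetical, not taken from the paper.

```python
# Hypothetical sketch: CreditRank is not specified in the abstract, so this
# illustrates a generic PageRank-style influence score over a weighted,
# directed trust graph. An edge a -> b with weight w means agent a places
# trust w in agent b's advice.

def influence_scores(trust, damping=0.85, iters=50):
    """Power-iteration influence scores for agents in a weighted trust graph.

    trust: dict mapping agent -> {trusted_agent: weight}.
    Returns a dict agent -> score; scores sum to 1.
    """
    agents = set(trust)
    for edges in trust.values():
        agents.update(edges)
    n = len(agents)
    score = {a: 1.0 / n for a in agents}
    for _ in range(iters):
        # base probability of "jumping" to any agent at random
        new = {a: (1.0 - damping) / n for a in agents}
        for a, edges in trust.items():
            total = sum(edges.values())
            if total == 0:
                continue
            # agent a passes its score to the agents it trusts,
            # proportionally to the trust weights
            for b, w in edges.items():
                new[b] += damping * score[a] * (w / total)
        # agents that trust nobody distribute their score uniformly
        dangling = sum(score[a] for a in agents if not trust.get(a))
        for a in agents:
            new[a] += damping * dangling / n
        score = new
    return score

# Toy example: carol is trusted by both alice and bob, so she ends up
# with the highest influence score.
trust = {
    "alice": {"bob": 1.0, "carol": 0.5},
    "bob": {"carol": 1.0},
    "carol": {},
}
scores = influence_scores(trust)
```

Under the paper's claim, an agent with a high score like carol's would, if dishonest, be unmasked faster, since more agents weigh her advice in their decisions.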
