Abstract

During a dialogue, agents exchange information and thus need to deal with what they receive. For that purpose, they should be able to reason effectively about the trustworthiness of information sources. This paper proposes an argument-based system that allows an agent to reason about its own beliefs and about information received from other sources. An agent's beliefs are of two kinds: beliefs about the environment (e.g., the window is closed) and beliefs about trusting sources (e.g., agent i trusts agent j). Six basic forms of trust are discussed in the paper, including the most common one, sincerity. Starting from a base containing such information, the system builds two types of arguments: arguments in favour of trusting a given information source, and arguments in favour of believing statements that may be received from other agents. We discuss how the different arguments interact and how an agent may decide to trust another source and thus accept information coming from that source.
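The two argument kinds described above can be illustrated with a minimal sketch. This is not the paper's formal system; the representation (trust as a directed graph, contradiction via a "~" prefix) and all names are illustrative assumptions. A trust argument is a chain of trust statements; a belief argument accepts a received statement when its source is trusted and no held belief contradicts it.

```python
from typing import Dict, List, Optional, Set

def trust_path(trust: Dict[str, Set[str]], src: str, dst: str) -> Optional[List[str]]:
    """Argument in favour of trusting dst: a chain of trust edges from src.
    (Illustrative assumption: trust is taken to be transitive here.)"""
    stack, seen = [[src]], {src}
    while stack:
        path = stack.pop()
        if path[-1] == dst:
            return path
        for nxt in trust.get(path[-1], set()) - seen:
            seen.add(nxt)
            stack.append(path + [nxt])
    return None

def accept(agent: str, beliefs: Set[str], trust: Dict[str, Set[str]],
           source: str, statement: str) -> bool:
    """Argument in favour of believing `statement`: the source is trusted
    and the agent holds no contradicting belief ('~p' contradicts 'p')."""
    neg = statement[1:] if statement.startswith("~") else "~" + statement
    return trust_path(trust, agent, source) is not None and neg not in beliefs

# Base: agent i trusts j, j trusts k; i believes the window is closed.
trust = {"i": {"j"}, "j": {"k"}}
beliefs = {"window_closed"}

assert trust_path(trust, "i", "k") == ["i", "j", "k"]          # chained trust argument
assert accept("i", beliefs, trust, "k", "door_open")           # trusted source, no conflict
assert not accept("i", beliefs, trust, "k", "~window_closed")  # contradicts a held belief
```

In the paper's richer setting, arguments can also attack one another (e.g., an argument against a source's sincerity undercuts belief arguments built on it); the sketch only shows the constructive side.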
