Abstract

This paper concerns the representation of introspective belief and knowledge in multi-agent systems. An introspective agent is an agent that can refer to itself and reason about its own beliefs. It is well known that representing introspective beliefs is theoretically problematic: an agent endowed with strong introspective abilities is likely to end up with inconsistent beliefs, since it can use introspection to express self-referential beliefs that are paradoxical in the same way as the classical paradoxes of self-reference. In multi-agent systems, such paradoxical beliefs can arise even without explicit introspection, for instance as beliefs about the correctness and completeness of other agents' beliefs. In this paper we explore the maximal sets of introspective beliefs that an agent can consistently obtain and retain when situated in a dynamic environment and when treating beliefs "syntactically" (that is, formalizing beliefs as axioms of first-order predicate logic rather than using modal formalisms). We generalize previous results by Perlis [1985] and des Rivières & Levesque [1988].
