Abstract

Logical theories for representing knowledge are often plagued by the so-called Logical Omniscience Problem. The problem stems from the clash between the desire to model rational agents, who should be capable of simple logical inferences, and the fact that any logical inference, however complex, almost inevitably consists of inference steps that are simple enough. This contradiction points to the fruitlessness of trying to solve the Logical Omniscience Problem qualitatively if the rationality of agents is to be maintained. We provide a quantitative solution to the problem compatible with two important facets of the reasoning agent: rationality and resource boundedness. More precisely, we provide a test for logical omniscience in a given formal theory of knowledge. The quantitative measures we use are inspired by complexity theory. We illustrate our framework with a number of examples ranging from the traditional implicit representation of knowledge in modal logic to the language of justification logic, which is capable of spelling out the internal inference process. We use these examples to divide representations of knowledge into logically omniscient and not logically omniscient, thus trying to determine how much information about the reasoning process needs to be present in a theory to avoid logical omniscience.
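
The quantitative test sketched in the abstract can be made concrete. The following is a minimal LaTeX sketch of one natural formalization, not the paper's exact definition: the polynomial bound p, the size measure |.|, and the proof relation are assumptions introduced here for illustration.

    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    % Hedged sketch of the quantitative test, not the paper's exact
    % formulation: an epistemic system passes the test (is not
    % logically omniscient) when every valid knowledge assertion A
    % of a formula F comes with a feasibly short proof of F.
    An epistemic system $\mathcal{E}$ is \emph{not logically omniscient}
    if there is a polynomial $p$ such that, for every knowledge assertion
    $A$ valid in $\mathcal{E}$ and asserting that a formula $F$ is known,
    \[
      \exists d \,\bigl( d \vdash F \;\wedge\; |d| \le p(|A|) \bigr),
    \]
    % where d ranges over derivations in the underlying logic and |.|
    % measures syntactic size of assertions and derivations.
    \end{document}

On this reading, an implicit modal assertion that F is known carries no record of any derivation of F, whereas a justification assertion t:F spells out the inference in the term t, which is what makes such a size comparison possible.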
