Abstract

The ability to revise one's beliefs rationally and efficiently in the light of new information is crucial for an intelligent agent. Classical work in belief revision focuses on idealized models and is not concerned with computational aspects; in particular, much of it studies the logical properties (e.g., the AGM postulates) that a rational revision operator should possess. For an implementation of belief revision, however, one has to take into account that any realistic agent is a finite being and that calculations take time. In this article, we introduce a new operation for revising beliefs, which we call reinforcement belief revision. Its computational model allows us to assess the operation in terms of time and space consumption. Moreover, the operation is proved to be equivalent to a semantic model based on the concept of possible worlds, which facilitates showing that reinforcement belief revision satisfies all desirable rationality postulates.
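To give a concrete sense of the possible-worlds view mentioned above, the following is a minimal, generic Python sketch of ranking-based revision over a few propositional atoms. It is not the reinforcement operator defined in the article; the atoms, the re-ranking rule, and the helper names are illustrative assumptions. The point is only to show the semantic idea: beliefs are the formulas true in all most-plausible worlds, and revision re-ranks worlds so that the most plausible worlds satisfying the new information come first.

```python
from itertools import product

# Generic illustration only (not the article's reinforcement operator):
# worlds are truth assignments to three atoms, an epistemic state is a
# ranking (lower rank = more plausible), and revision re-ranks worlds.
ATOMS = ("p", "q", "r")
WORLDS = [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=len(ATOMS))]

def revise(rank, evidence):
    """Revise a ranking by `evidence` (a predicate over worlds).

    Worlds satisfying the evidence keep their relative order and are shifted
    so that the best of them get rank 0; all other worlds are pushed strictly
    above them. A simple re-ranking rule chosen purely for illustration.
    """
    sat = [w_id for w_id in rank if evidence(WORLDS[w_id])]
    if not sat:
        raise ValueError("evidence is inconsistent: no world satisfies it")
    best = min(rank[w_id] for w_id in sat)
    top = max(rank.values())
    return {
        w_id: (r - best) if evidence(WORLDS[w_id]) else (r + top + 1)
        for w_id, r in rank.items()
    }

def beliefs(rank):
    """Atoms believed outright: true in every rank-0 (most plausible) world."""
    minimal = [WORLDS[w_id] for w_id, r in rank.items() if r == 0]
    return {a for a in ATOMS if all(w[a] for w in minimal)}

# Start from a flat ranking (complete ignorance), then revise by p and by q.
rank = {i: 0 for i in range(len(WORLDS))}
rank = revise(rank, lambda w: w["p"])
rank = revise(rank, lambda w: w["q"])
print(beliefs(rank))  # {'p', 'q'}: both pieces of information are now believed
```

Under this kind of semantics, postulates such as the AGM ones can be checked by reasoning about how the ranking changes, which is what makes the equivalence result in the article useful for establishing the rationality properties of the operator.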
