Abstract

We present a theory and architectural model of coordinated intelligent agent decision-making based upon epistemic utility theory. This architecture provides each agent with an epistemic system, which accounts for its goals and values, its beliefs, its willingness to risk error, the existence of incomplete and contradictory evidence, and the possibility that currently held beliefs are untrue. The model is broad enough to address real issues while providing sufficient detail and mathematical precision to be practically useful for the problems of estimation and control.
