Abstract
We present a theory and architectural model of coordinated intelligent agent decision-making based upon epistemic utility theory. This architecture provides each agent with an epistemic system, which accounts for its goals and values, its beliefs, its willingness to risk error, the existence of incomplete and contradictory evidence, and the possibility that currently held beliefs are untrue. The model is broad enough to address real issues while providing sufficient detail and mathematical precision to be practically useful for the problems of estimation and control.