Abstract

My research seeks to answer the question of how an agent tasked with making sense of its world, by finding explanations for evidence (e.g., sensor reports) using domain-general strategies, may accurately and efficiently handle incomplete evidence, noisy evidence, and an incomplete knowledge base. I propose the following answer. The agent should employ an optimal abductive reasoning algorithm (developed piecewise and shown to be best in a class of similar algorithms) that allows it to reason from evidence to causes. For the sake of efficiency and operational concerns, the agent should establish beliefs periodically rather than waiting until it has obtained all the evidence it will ever be able to obtain. If the agent commits to beliefs on the basis of incomplete or noisy evidence or an incomplete knowledge base, those beliefs may be incorrect, and future evidence may then result in failed predictions or anomalies. The agent must determine whether it should retain its beliefs and therefore discount the newly obtained evidence, revise its prior beliefs, or expand its knowledge base (a process that can be described as anomaly-driven or explanation-based learning). I have developed an abductive metareasoning procedure that aims to reason appropriately about these situations. Preliminary experiments in two reasoning tasks indicate that the procedure is effective.
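As a rough, non-authoritative illustration of the abduce-then-metareason loop described above, the following is a minimal Python sketch. Every name in it (Hypothesis, explain, metareason, the greedy scoring, and the noise_prior threshold) is an illustrative assumption of this sketch, not the paper's actual algorithm or optimality-proven procedure.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Hypothesis:
    cause: str
    explains: frozenset   # evidence items this hypothesis accounts for
    plausibility: float   # domain-general score in [0, 1]


def explain(evidence, hypotheses):
    """Greedy abduction: repeatedly adopt the hypothesis that explains the
    most still-unexplained evidence, breaking ties by plausibility."""
    unexplained, beliefs = set(evidence), []
    while unexplained:
        candidates = [h for h in hypotheses if h.explains & unexplained]
        if not candidates:
            break  # remaining evidence is anomalous w.r.t. the knowledge base
        best = max(candidates,
                   key=lambda h: (len(h.explains & unexplained), h.plausibility))
        beliefs.append(best)
        unexplained -= best.explains
    return beliefs, unexplained


def metareason(old_evidence, new_evidence, hypotheses, noise_prior=0.1):
    """When new evidence conflicts with committed beliefs, choose among:
    discounting the evidence as noise, revising prior beliefs by
    re-explaining all evidence, or expanding the knowledge base."""
    beliefs, anomalies = explain(set(old_evidence) | set(new_evidence),
                                 hypotheses)
    if not anomalies:
        return "revise", beliefs    # re-explanation succeeds: adopt it
    if noise_prior >= 0.5:
        return "discount", beliefs  # noisy sensors are the likelier story
    return "expand-kb", beliefs     # hypothesize a new cause (EBL-style)


# Hypothetical usage: "rash" is explained by nothing in the knowledge base,
# so with a low noise prior the sketch opts to expand the knowledge base.
h1 = Hypothesis("flu", frozenset({"fever", "cough"}), 0.7)
h2 = Hypothesis("cold", frozenset({"cough"}), 0.5)
action, beliefs = metareason({"cough"}, {"fever", "rash"}, [h1, h2])
print(action)  # expand-kb
```

The three-way return mirrors the choice the abstract names (retain and discount, revise, or expand the knowledge base); how a real agent scores those options is exactly what the proposed metareasoning procedure decides, and the fixed threshold here is only a placeholder.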
