Decision analysis and risk analysis have grown up around a set of organizing questions: what might go wrong, how likely is it to do so, how bad might the consequences be, what should be done to maximize expected utility and minimize expected loss or regret, and how large are the remaining risks? In probabilistic causal models capable of representing unpredictable and novel events, probabilities for what will happen, and even for what is possible, cannot necessarily be determined in advance. Standard decision and risk analysis questions become inherently unanswerable ("undecidable") for realistically complex causal systems with "open-world" uncertainties about what exists, what can happen, what other agents know, and how they will act. Recent artificial intelligence (AI) techniques enable agents (e.g., robots, drone swarms, and automatic controllers) to learn, plan, and act effectively despite open-world uncertainties in a host of practical applications, from robotics and autonomous vehicles to industrial engineering, transportation and logistics automation, and industrial process control. This article offers an AI/machine learning perspective on recent ideas for making decision and risk analysis (even) more useful. It reviews undecidability results and recent principles and methods for enabling intelligent agents to learn what works and how to complete useful tasks, adjust plans as needed, and achieve multiple goals safely and reasonably efficiently when possible, despite open-world uncertainties and unpredictable events. In the near future, these principles could contribute to the formulation and implementation of more effective plans and policies in business, regulation, and public policy, as well as in engineering, disaster management, and military and civil defense operations.
They can extend traditional decision and risk analysis to deal more successfully with open-world novelty and unpredictable events in large-scale real-world planning, policymaking, and risk management.
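The contrast drawn above can be illustrated with a minimal sketch: classical expected-utility choice presumes the probabilities and payoffs of each action are known in advance, whereas a learning agent must estimate action values from experience alone, discovering what works as outcomes are revealed. The sketch below is purely illustrative and is not taken from the article; the epsilon-greedy bandit agent, the function name `epsilon_greedy_bandit`, and the hypothetical reward means are all assumptions chosen for a simple, self-contained demonstration.

```python
import random

def epsilon_greedy_bandit(true_means, steps=2000, epsilon=0.1, seed=0):
    """Learn action values online when outcome distributions are not known in advance.

    true_means: hypothetical per-action mean rewards (hidden from the agent).
    Returns the agent's estimated value for each action after `steps` trials.
    """
    rng = random.Random(seed)
    n_actions = len(true_means)
    counts = [0] * n_actions        # how many times each action was tried
    estimates = [0.0] * n_actions   # running estimates of each action's value
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(n_actions)  # explore: try something, possibly novel
        else:
            a = max(range(n_actions), key=lambda i: estimates[i])  # exploit best so far
        # The outcome is observed only after acting; no prior probabilities needed.
        reward = rng.gauss(true_means[a], 1.0)
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]  # incremental mean update
    return estimates

# The agent identifies the best action from experience alone.
values = epsilon_greedy_bandit([0.2, 1.0, 0.5])
best = max(range(len(values)), key=lambda i: values[i])
```

Here exploration (the epsilon-probability random action) is what lets the agent keep testing alternatives it does not yet believe are best, a simple stand-in for the broader principle of acting effectively despite not knowing in advance what is possible or how good each option is.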