Optimizing economic and public policy is critical to address socioeconomic issues and trade-offs, e.g., improving equality, productivity, or wellness, and poses a complex mechanism design problem. A policy designer needs to consider multiple objectives, policy levers, and behavioral responses from strategic actors who optimize for their individual objectives. Moreover, real-world policies should be explainable and robust to simulation-to-reality gaps, e.g., due to calibration issues. Existing approaches are often limited, for example, to a narrow set of policy levers or to objectives that are hard to measure; they may not yield explicit optimal policies or may not account for strategic behavior. Hence, it remains challenging to optimize policy in real-world scenarios. Here we show that the AI Economist framework enables effective, flexible, and interpretable policy design using two-level reinforcement learning (RL) and data-driven simulations. We validate our framework by optimizing the stringency of US state policies and Federal subsidies during a pandemic, e.g., COVID-19, using a simulation fitted to real data. We find that log-linear policies trained using RL significantly improve social welfare, based on both public health and economic outcomes, compared to past real-world outcomes. Their behavior can be explained, e.g., well-performing policies respond strongly to changes in recovery and vaccination rates. They are also robust to calibration errors, e.g., infection rates that are over- or underestimated. To date, real-world policymaking has seen little adoption of machine learning methods at large, including RL and AI-driven simulations. Our results demonstrate the potential of AI to guide policy design and improve social welfare amidst the complexity of the real world.
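For intuition, a log-linear policy of the kind described above can be read as a weighted combination of log-transformed indicators squashed to a bounded stringency level. The sketch below is a minimal illustration under assumed feature names (infection, recovery, and vaccination rates) and hand-picked weights; in the actual framework the weights would be learned with RL, and the exact parameterization may differ.

```python
import numpy as np

def log_linear_policy(features: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Map log-transformed indicators to a stringency level in [0, 1].

    A sketch, not the paper's exact parameterization: feature choice,
    weights, and the output squashing are illustrative assumptions.
    """
    # Log-transform strictly positive rates; the epsilon guards against log(0).
    z = bias + weights @ np.log(features + 1e-8)
    # Squash to a bounded stringency level via the logistic function.
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical daily rates for one US state: infection, recovery, vaccination.
rates = np.array([0.02, 0.05, 0.4])
# Hand-picked weights for illustration; RL would learn these in the framework.
weights = np.array([0.8, -0.5, -0.3])
print(log_linear_policy(rates, weights, bias=0.1))
```

Under this form, the policy's behavior is directly interpretable: the sign and magnitude of each weight show how strongly stringency responds to changes in the corresponding indicator, which is what makes claims like "well-performing policies respond strongly to recovery and vaccination rates" auditable.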