Abstract

Policy gradient methods, which have powered much of the recent success in reinforcement learning, search for an optimal policy in a parameterized policy class by performing stochastic gradient descent on the cumulative expected cost-to-go under some initial state distribution. Although widely used, these methods lack theoretical guarantees: the optimization objective is typically nonconvex even for simple control problems, and hence they are only understood to converge to a stationary point. In “Global Optimality Guarantees for Policy Gradient Methods,” J. Bhandari and D. Russo identify structural properties of the underlying MDP that guarantee that, despite nonconvexity, the optimization objective has no suboptimal stationary points, ensuring asymptotic convergence of policy gradient methods to globally optimal policies. Under stronger conditions, the authors show that the policy gradient objective satisfies a Polyak-Łojasiewicz (gradient dominance) condition, which yields fast convergence rates. In addition, when some of these conditions are relaxed, the authors provide bounds on the suboptimality gap of any stationary point. The results rely on a key connection with policy iteration, a classic dynamic programming algorithm that solves a single-period optimization problem at every step. The authors show how structure in the single-period optimization problems solved by policy iteration translates into favorable properties of the multi-period policy gradient objective, allowing first-order methods to find globally optimal solutions. The authors also instantiate their framework for several classical control problems, including tabular and linear MDPs, linear quadratic control, optimal stopping, and finite-horizon inventory control.
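
For reference, the generic form of a Polyak-Łojasiewicz (gradient dominance) condition on a cost objective \(\ell\) with minimizer \(\theta^*\) is the inequality below; this is our gloss on the terminology, not the paper's exact statement, whose conditions and constants are tailored to the policy gradient objective.

\[
\ell(\theta) - \ell(\theta^*) \;\le\; \frac{1}{2\mu}\,\lVert \nabla \ell(\theta) \rVert^2 \qquad \text{for some } \mu > 0 \text{ and all } \theta .
\]

Such an inequality immediately implies that every stationary point (where \(\nabla \ell(\theta) = 0\)) is globally optimal and, together with smoothness of \(\ell\), is the standard route to fast convergence rates for gradient methods despite nonconvexity; see the full text for the precise statements used by the authors.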
