Abstract

This article presents a novel methodology for tractably solving optimal control and offline reinforcement learning (RL) problems for high-dimensional systems. This work is motivated by the ongoing challenges of safety, computation, and optimality in high-dimensional optimal control. We address these key challenges with the following approach. First, we identify a sequence-modeling surrogate methodology that takes as input the initial state and a time series of control inputs, and outputs an approximation of the objective function and trajectories of the constraint functions. Importantly, this approach entirely absorbs the individual state-transition dynamics. The sole dependence on the initial state means we can apply dimensionality reduction to compress the model input while retaining most of its information. Uncertainty in the surrogate objective will affect the resulting optimality. Critically, however, uncertainty in the surrogate constraint functions will lead to infeasibility, i.e., unsafe actions. When considering offline RL, the most significant modeling errors will be encountered on out-of-distribution (OOD) data. Therefore, we apply Wasserstein ambiguity sets to “robustify” our surrogate modeling approach against worst-case out-of-sample modeling errors, based on the distribution of test-data residuals. We demonstrate the efficacy of this combined approach through a case study of safe, optimal fast charging of a high-dimensional lithium-ion battery model at low temperatures.
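To make the surrogate interface concrete, the following is a minimal sketch, not the authors' implementation, of a sequence model that maps a compressed initial state and a control-input sequence to a scalar objective estimate and per-step constraint trajectories. All names, dimensions, and the choice of a GRU backbone are illustrative assumptions.

```python
# Hypothetical sketch of the surrogate described in the abstract: inputs are a
# (dimensionality-reduced) initial state and a horizon of control inputs; outputs
# are a scalar objective estimate and constraint-function trajectories.
import torch
import torch.nn as nn

class SurrogateModel(nn.Module):
    def __init__(self, reduced_state_dim, control_dim, n_constraints, hidden_dim=64):
        super().__init__()
        # Encode the compressed initial state into the recurrent hidden state.
        self.state_encoder = nn.Linear(reduced_state_dim, hidden_dim)
        # Sequence model over the control-input time series (the state-transition
        # dynamics are absorbed into this learned map).
        self.rnn = nn.GRU(control_dim, hidden_dim, batch_first=True)
        # Output heads: one scalar objective, one constraint value per time step.
        self.objective_head = nn.Linear(hidden_dim, 1)
        self.constraint_head = nn.Linear(hidden_dim, n_constraints)

    def forward(self, z0, controls):
        # z0: (batch, reduced_state_dim) compressed initial state
        # controls: (batch, horizon, control_dim) control-input sequence
        h0 = torch.tanh(self.state_encoder(z0)).unsqueeze(0)   # (1, batch, hidden)
        out, h_final = self.rnn(controls, h0)
        objective = self.objective_head(h_final.squeeze(0)).squeeze(-1)  # (batch,)
        constraints = self.constraint_head(out)                # (batch, horizon, n_constraints)
        return objective, constraints

if __name__ == "__main__":
    model = SurrogateModel(reduced_state_dim=8, control_dim=1, n_constraints=2)
    z0 = torch.randn(4, 8)           # e.g., compressed battery initial states
    u = torch.randn(4, 50, 1)        # e.g., 50-step charging-current profiles
    J_hat, g_hat = model(z0, u)
    print(J_hat.shape, g_hat.shape)  # torch.Size([4]) torch.Size([4, 50, 2])
```

In such a formulation the predicted constraint trajectories would be the quantities tightened by the Wasserstein ambiguity sets before being imposed in the optimal control problem.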
