Abstract

The topic of learning to solve optimization problems has received interest from both the operations research and machine learning communities. In this paper, we combine ideas from both fields to address the problem of learning to generate decisions for instances of optimization problems with potentially nonlinear or nonconvex constraints, where the feasible set varies with contextual features. We propose a novel framework for training a generative model to produce provably optimal decisions by combining interior point methods and adversarial learning, which we further embed within an iterative data generation algorithm. To this end, we first train a classifier to learn feasibility and then train the generative model to produce optimal decisions for an optimization problem, using the classifier as a regularizer. We prove that decisions generated by our model satisfy in-sample and out-of-sample optimality guarantees. Furthermore, the learning models are embedded in an active learning loop in which synthetic instances are iteratively added to the training data; this allows us to progressively generate decisions with provably tighter optimality guarantees. We investigate case studies in portfolio optimization and personalized treatment design, demonstrating that our approach yields advantages over predict-then-optimize and supervised deep learning techniques, respectively. In particular, our framework is more robust to parameter estimation error than the predict-then-optimize paradigm and adapts better to domain shift than supervised learning models. This paper was accepted by Chung Piaw Teo, optimization. Funding: This work was supported in part by the Natural Sciences and Engineering Research Council of Canada. Supplemental Material: The online appendix and data files are available at https://doi.org/10.1287/mnsc.2020.03565.
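To make the two-stage procedure concrete, the sketch below illustrates one way the classifier-regularized training described above could look. This is a minimal sketch in PyTorch under our own assumptions, not the authors' implementation: the names FeasibilityClassifier, DecisionGenerator, train_generator, cost_fn, and lambda_feas are hypothetical, and the log-barrier-style penalty is only one plausible way to combine the interior-point and adversarial-learning ideas mentioned in the abstract.

```python
import torch
import torch.nn as nn

class FeasibilityClassifier(nn.Module):
    """Stage 1 (assumed pre-trained on labeled feasible/infeasible pairs):
    predicts the probability that decision z is feasible under context x."""
    def __init__(self, x_dim, z_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=-1))

class DecisionGenerator(nn.Module):
    """Maps a context x to a candidate decision z(x)."""
    def __init__(self, x_dim, z_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, z_dim))

    def forward(self, x):
        return self.net(x)

def train_generator(gen, clf, cost_fn, contexts, lambda_feas=10.0, steps=1000):
    """Stage 2: minimize the objective cost while the frozen classifier
    penalizes decisions it deems infeasible. The -log(p) term acts as a
    soft, barrier-like feasibility regularizer (a stand-in for the
    interior-point idea; a hypothetical choice, not the paper's exact loss)."""
    opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    for _ in range(steps):
        # Sample a minibatch of contextual features.
        x = contexts[torch.randint(len(contexts), (32,))]
        z = gen(x)
        penalty = -torch.log(clf(x, z) + 1e-8).mean()
        loss = cost_fn(x, z).mean() + lambda_feas * penalty
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In the active learning loop described in the abstract, decisions produced by the trained generator could then be labeled as synthetic instances and fed back into the training data for the next round, progressively tightening the feasibility boundary the classifier learns.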
