In a typical (forward) optimization problem, a decision-maker uses given values of model parameters to compute the values of decision variables. The goal in inverse optimization (IO) is instead to infer parameters that render given values of decision variables optimal. Most papers on IO use duality to impute objective function parameters; a corresponding literature on imputing constraint parameters is essentially non-existent, even for linear programs. The difficulty is that these IO problems include nonconvex bilinear constraints and/or objectives. We study inverse semi-definite programs (SDPs) in which the matrices on the left-hand sides of the constraints are unknown and must satisfy side constraints. We consider two types of such problems, depending on whether the given SDP decision variable values are primal or dual. In each case, we study three sub-cases, based on whether the right-hand-side coefficients and the objective function matrix are both known or only one of the two is known. This yields six variants of the inverse problem. For each variant, we first look for sufficient conditions, either on the side constraints or on the problem structure, under which the nonconvex bilinear inverse problem simplifies substantially: it either has a trivial solution or can be reformulated as a (set of) convex program(s). When such conditions are not evident, we propose three tailored heuristics: the Convex–Concave Procedure, Sequential Semi-definite Programming, and Alternate Convex Search. We perform computational experiments to gain insight into the performance of these three methods.
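
As a rough illustration of the bilinear structure and of the Alternate Convex Search idea mentioned above, the following Python/CVXPY sketch imputes constraint matrices A_i for a toy SDP in which the primal solution X_star, the objective matrix C, and the right-hand sides b are given. The toy data, variable names, and residual-minimization subproblems are illustrative assumptions for exposition, not the paper's exact formulation.

    import numpy as np
    import cvxpy as cp

    # Illustrative sketch of Alternate Convex Search (ACS) for one inverse-SDP
    # variant: the primal solution X_star, the objective matrix C, and the
    # right-hand sides b are given, and the constraint matrices A_i are imputed.
    # All data below are hypothetical toy values, and the residual-minimization
    # formulation is an assumed stand-in for the paper's exact subproblems.

    n, m = 3, 2
    C = np.diag([1.0, 2.0, 3.0])            # known objective matrix (assumed)
    b = np.array([1.0, 0.5])                # known right-hand sides (assumed)
    X_star = np.diag([0.6, 0.4, 0.0])       # given primal solution (assumed)

    y_val = np.zeros(m)                     # initial guess for dual multipliers

    for it in range(20):
        # A-step: with y fixed, the optimality conditions are linear in the A_i,
        # so imputing them is a convex SDP.
        A = [cp.Variable((n, n), symmetric=True) for _ in range(m)]
        S = C - sum(y_val[i] * A[i] for i in range(m))          # dual slack matrix
        cons = [S >> 0] + [cp.trace(A[i] @ X_star) == b[i] for i in range(m)]
        cp.Problem(cp.Minimize(cp.trace(S @ X_star)), cons).solve()
        A_val = [Ai.value for Ai in A]

        # y-step: with the A_i fixed, the same conditions are linear in y,
        # giving a second convex SDP.
        y = cp.Variable(m)
        S = C - sum(y[i] * A_val[i] for i in range(m))
        prob_y = cp.Problem(cp.Minimize(cp.trace(S @ X_star)), [S >> 0])
        prob_y.solve()
        y_val = y.value

        # A complementary-slackness residual near zero certifies that X_star is
        # optimal for the imputed constraint matrices.
        if prob_y.value is not None and prob_y.value < 1e-8:
            break

Each half-step is convex because fixing one factor of the bilinear products y_i A_i leaves the optimality conditions linear in the other factor; this is the reason alternating schemes of this kind apply to the nonconvex bilinear inverse problem.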