Abstract

Inverse optimization (inverse optimal control) is the task of imputing a cost function such that given test points (trajectories) are (nearly) optimal with respect to the discovered cost. Prior methods in inverse optimization assume that the true cost is a convex combination of a set of convex basis functions and that this basis is consistent with the test points. However, the consistency assumption is not always justified, as in many applications the principles by which the data are generated are not well understood. This work proposes using the distance between a test point and the set of global optima generated by convex combinations of the convex basis functions as a measure of the expressive quality of the basis with respect to the test point. A large minimal distance invalidates the set of basis functions. The concept of a set of global optima is introduced and its properties are explored in unconstrained and constrained settings. Upper and lower bounds for the minimum distance in the convex quadratic setting are computed by bi-level gradient descent and an enriched linear matrix inequality, respectively. Extensions of this framework include max-representable basis functions, nonconvex basis functions (local minima), and the application of polynomial optimization techniques.
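A minimal sketch of the bi-level upper bound described above, assuming convex quadratic basis functions f_i(x) = 0.5 xᵀ Q_i x + b_iᵀ x with positive definite Q_i, so the inner minimization has a closed form; the softmax parameterization of the weight simplex and all function names are illustrative choices, not taken from the paper:

```python
# Hypothetical sketch (not the authors' code): for each weight vector w on the
# simplex, the convex combination f_w(x) = 0.5 x^T Q(w) x + b(w)^T x has a unique
# global optimum x*(w); gradient descent over w shrinks ||x_hat - x*(w)||, and any
# feasible w yields an upper bound on the distance to the set of global optima.
import numpy as np

def softmax(z):
    """Map an unconstrained vector z to a point on the probability simplex."""
    e = np.exp(z - z.max())
    return e / e.sum()

def minimizer(w, Qs, bs):
    """Inner problem: global optimum of f_w, i.e. the solution of Q(w) x = -b(w)."""
    Q = sum(wi * Qi for wi, Qi in zip(w, Qs))
    b = sum(wi * bi for wi, bi in zip(w, bs))
    return np.linalg.solve(Q, -b)

def distance_upper_bound(x_hat, Qs, bs, steps=2000, lr=0.1, eps=1e-6):
    """Outer problem: descend over z (with w = softmax(z)) on ||x_hat - x*(w)||."""
    k = len(Qs)
    z = np.zeros(k)
    for _ in range(steps):
        d0 = np.linalg.norm(x_hat - minimizer(softmax(z), Qs, bs))
        grad = np.zeros(k)
        for i in range(k):  # finite-difference gradient in z
            z_eps = z.copy()
            z_eps[i] += eps
            grad[i] = (np.linalg.norm(x_hat - minimizer(softmax(z_eps), Qs, bs)) - d0) / eps
        z -= lr * grad
    w = softmax(z)
    return np.linalg.norm(x_hat - minimizer(w, Qs, bs)), w

if __name__ == "__main__":
    # Toy example with two quadratic basis functions in R^2.
    Qs = [np.eye(2), np.diag([2.0, 0.5])]
    bs = [np.array([1.0, -1.0]), np.array([-0.5, 2.0])]
    x_hat = np.array([0.3, 0.7])
    dist, w = distance_upper_bound(x_hat, Qs, bs)
    print(f"upper bound on distance: {dist:.4f}, weights: {w}")
```

Because the outer objective is only an upper bound wherever the descent stops, a large value does not by itself invalidate the basis; the certified lower bound in the paper comes from the linear matrix inequality formulation instead.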
