Abstract

Finding the equilibrium strategy of agents is one of the central problems in game theory. Perhaps equally intriguing is the inverse problem: given a finite set of observed equilibrium actions, how can we learn the utilities of competing players and then use the learned models to predict their future actions? Instead of following an estimate-then-predict approach, this work proposes a decision-focused learning (DFL) method that learns the utility function directly so as to improve prediction accuracy. The game's equilibrium is represented as a layer and integrated into an end-to-end optimization framework. We derive covering-number bounds for the set of solution functions arising from a generic parametric variational inequality, and we establish a generalization bound, with an improved rate, for this set of solution functions with respect to a smooth loss function. Moreover, we propose an algorithm based on iterative differentiation to forward- and back-propagate through the equilibrium layer, and we establish its convergence. Finally, we numerically validate the proposed framework on a utility learning problem in which the agents' utility functions are approximated by partially input convex neural networks (PICNNs).
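To make the equilibrium-layer idea concrete, the following is a minimal illustrative sketch (not the paper's implementation) on a hypothetical one-dimensional problem: the layer maps a utility parameter theta to the solution x*(theta) of the strongly monotone variational inequality F(x, theta) = 0 with F(x, theta) = 2x - theta, and iterative differentiation propagates the sensitivity dx/dtheta through the same unrolled fixed-point iterations used in the forward pass. All names and constants here are assumptions chosen for illustration.

```python
# Illustrative sketch of forward/backward passes through an equilibrium layer
# via iterative (unrolled) differentiation. The VI map F, the step size eta,
# and the iteration count are hypothetical choices, not the paper's setup.

def F(x, theta):
    # Strongly monotone operator with closed-form root x*(theta) = theta / 2.
    return 2.0 * x - theta

def solve_and_differentiate(theta, eta=0.2, iters=200):
    """Forward: fixed-point iteration x <- x - eta * F(x, theta).
    Backward (iterative differentiation): differentiate the same update
    rule w.r.t. theta alongside the forward pass."""
    x, dx_dtheta = 0.0, 0.0
    for _ in range(iters):
        x = x - eta * F(x, theta)
        # d/dtheta of the update, using dF/dx = 2 and dF/dtheta = -1:
        dx_dtheta = dx_dtheta - eta * (2.0 * dx_dtheta - 1.0)
    return x, dx_dtheta

theta = 3.0
x_star, dx = solve_and_differentiate(theta)
# Converges to x*(3) = 1.5 with sensitivity dx*/dtheta = 0.5.
print(x_star, dx)
```

In a decision-focused pipeline, the chain rule then combines dx*/dtheta with the gradient of the prediction loss evaluated at x*(theta), so the utility parameters are updated directly to reduce prediction error rather than to fit an intermediate estimate.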
