Abstract

The search problem of computing a Stackelberg (or leader-follower) equilibrium (also referred to as an optimal strategy to commit to) has been widely investigated in the scientific literature, almost exclusively in the single-follower setting. Although the optimistic and pessimistic versions of the problem, i.e., those where the single follower breaks any ties among multiple equilibria either in favour of or against the leader, are solved with different methodologies, both cases allow for efficient, polynomial-time algorithms based on linear programming. The situation is different with multiple followers, where results are only sporadic and depend strictly on the nature of the followers' game. In this paper, we investigate the setting of a normal-form game with a single leader and multiple followers who, after observing the leader's commitment, play a Nash equilibrium. When both the leader and the followers are allowed to play mixed strategies, the corresponding search problem, in both its optimistic and pessimistic versions, is known to be inapproximable in polynomial time to within any multiplicative polynomial factor unless $\mathsf{P} = \mathsf{NP}$. Exact algorithms are known only for the optimistic case. We focus on the case where the followers play pure strategies (a restriction that applies to a number of real-world scenarios and which, in principle, makes the problem easier) under the assumption of pessimism; the optimistic version of the problem can be straightforwardly solved in polynomial time. After casting this search problem (with followers playing pure strategies) as a pessimistic bilevel programming problem, we show that, with two followers, the problem is NP-hard and, with three or more followers, it cannot be approximated in polynomial time to within any multiplicative factor which is polynomial in the size of the normal-form game, nor, assuming utilities in [0, 1], to within any constant additive loss strictly smaller than 1, unless $\mathsf{P} = \mathsf{NP}$. This shows that, differently from what happens in the optimistic version, hardness and inapproximability in the pessimistic problem are not due to the adoption of mixed strategies. We then show that the problem admits, in the general case, a supremum but not a maximum, and we propose a single-level mathematical programming reformulation which asks for the maximization of a nonconcave quadratic function over an unbounded nonconvex feasible region defined by linear and quadratic constraints. Since, due to admitting a supremum but not a maximum, only a restricted version of this formulation can be solved to optimality with state-of-the-art methods, we propose an exact ad hoc algorithm (which we also embed within a branch-and-bound scheme) capable of computing the supremum of the problem and, for cases where there is no leader's strategy at which such a value is attained, also an $\alpha$-approximate strategy, where $\alpha > 0$ is an arbitrary additive loss (at most as large as the supremum). We conclude the paper by evaluating the scalability of our algorithms via computational experiments on a well-established testbed of game instances.
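The pessimistic bilevel view described in the abstract can be summarized with a short sketch. The notation below is assumed for illustration and is not taken verbatim from the paper: $\Delta$ is the leader's mixed-strategy simplex, $a$ ranges over pure-strategy profiles of the followers, $E(x)$ is the set of pure Nash equilibria of the followers' game induced by the commitment $x$, and $u_\ell$ is the leader's expected utility.

\[
  \sup_{x \in \Delta} \; \min_{a \in E(x)} \; u_\ell(x, a)
\]

The inner minimization (the followers breaking ties against the leader) can make the leader's value function discontinuous in $x$, which is why, as stated above, the problem may only admit a supremum that no commitment actually attains.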

Highlights

  • In recent years, Stackelberg Games (SGs) and their corresponding Stackelberg Equilibria (SEs) have attracted a growing interest in many disciplines, including theoretical computer science, artificial intelligence, and operations research

  • This paper extends the complexity results by studying the inapproximability of the problem (Sect. 4), introduces and analyses a single-level Quadratically Constrained Quadratic Program (QCQP) reformulation and a Mixed-Integer Linear Program (MILP) restriction of it (Sect. 5), substantially extends the mathematical details needed to establish the correctness of our algorithms, illustrating their step-by-step execution on an example (Sect. 6 and Appendix A), and reports on an extensive set of computational results carried out to validate our methods (Sect. 7)

  • The computational results are reported in terms of: Time, the average computing time, in seconds; LB, the average value of the best feasible solution found; Gap, the average additive gap, measured as UB − LB, where UB is the upper bound returned by the algorithm; Opt, the percentage of instances solved to optimality; and Feas, the percentage of instances for which a feasible solution has been found
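Below is a minimal aggregation sketch (not the authors' code; the record fields time, lb, ub, solved, and feasible are hypothetical names) showing how per-instance results could be turned into the columns described in the last item above:

    from statistics import mean

    def aggregate(records):
        # records: one dict per game instance, with hypothetical keys:
        #   "time" (seconds), "lb"/"ub" (lower/upper bounds on the leader's utility),
        #   "solved" and "feasible" (booleans)
        feasible = [r for r in records if r["feasible"]]
        return {
            "Time": mean(r["time"] for r in records),                          # average computing time
            "LB":   mean(r["lb"] for r in feasible),                           # average best feasible value
            "Gap":  mean(r["ub"] - r["lb"] for r in feasible),                 # average additive gap UB - LB
            "Opt":  100.0 * sum(r["solved"] for r in records) / len(records),  # % solved to optimality
            "Feas": 100.0 * len(feasible) / len(records),                      # % with a feasible solution
        }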

Summary

Introduction

Stackelberg (or Leader-Follower) Games (SGs) and their corresponding Stackelberg Equilibria (SEs) have attracted a growing interest in many disciplines, including theoretical computer science, artificial intelligence, and operations research. When it comes to breaking ties among multiple equilibria, it is natural to consider two cases: the optimistic one (often called strong SE), where the followers end up playing an equilibrium which maximizes the leader's utility, and the pessimistic one (often called weak SE), where the followers end up playing an equilibrium by which the leader's utility is minimized. This distinction has been customary in the literature since the seminal paper on SEs with mixed-strategy commitments by Von Stengel and Zamir [34]. The pessimistic SE hedges against the worst possible tie-breaking and thus gives the leader a stronger utility guarantee; this degree of robustness, however, comes at a high computational cost, as computing a pessimistic SE is a much harder task than computing its optimistic counterpart
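As a compact illustration (with notation assumed here, as above, rather than fixed in this excerpt), if $E(x)$ denotes the set of equilibria the followers may end up playing after observing the commitment $x$ and $u_\ell$ the leader's expected utility, the two tie-breaking rules correspond to the values

\[
  \text{strong (optimistic) SE:} \quad \max_{x} \, \max_{a \in E(x)} u_\ell(x, a),
  \qquad
  \text{weak (pessimistic) SE:} \quad \sup_{x} \, \inf_{a \in E(x)} u_\ell(x, a).
\]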

Stackelberg Nash Equilibria
Original Contributions
Paper Outline
Previous Works
Notation
The Problem and Its Formulation
The Optimistic Case
The Pessimistic Case
Some Preliminary Results
Computational Complexity
NP-Completeness
Inapproximability
Single-Level Reformulation and Restriction
Single-Level Reformulation
Exact Algorithm
Enumerative Algorithm
Finding an α-Approximate Strategy
Outline of the Explicit Enumeration Algorithm
On The Polynomial Representability of P-SPNEs
Branch-and-Bound Algorithm
Outline of the Branch-and-Bound Algorithm
Experimental Evaluation
Experimental Results with Two Followers
Experimental Results with More Followers and Final Observations
Conclusions and Future Works
Appendix A: Illustration of the Algorithms
Illustration of the Branch-and-Bound Algorithm