Abstract

This letter considers the robustness of game-theoretic approaches to distributed submodular maximization problems, which have been used to model a wide variety of applications such as competitive facility location, distributed sensor coverage, and routing in transportation networks. Recent work showed that in this class of games, if $k$ agents suffer a technical fault and cannot observe the actions of other agents, Nash equilibria are still guaranteed to be within a factor of $k+2$ of optimal. However, we show that at a Nash equilibrium with a very low objective function value, the total payoffs of the compromised agents are very close to the payoffs they would receive at an optimal allocation. At the extreme worst-case equilibria, all agents are perfectly indifferent between their equilibrium and optimal actions; hence, these equilibria have low stability. Conversely, we show that if agents' equilibrium payoffs are much higher than their optimal-allocation payoffs (i.e., the equilibrium is "stable"), then the equilibrium must be of relatively high quality. To demonstrate how this phenomenon may be exploited algorithmically, we perform simulations using the log-linear learning algorithm and show that average performance on worst-case instances is far better than even our improved analytical guarantees.
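For readers unfamiliar with the learning dynamics mentioned above, the following is a minimal sketch of log-linear learning on a toy distributed coverage game with marginal-contribution utilities. The game, utility definition, temperature value, and all function names here are illustrative assumptions for exposition only, not the paper's experimental setup.

```python
import math
import random

# Toy setup (assumed for illustration): each agent selects one resource to
# cover; the system objective is the number of distinct resources covered,
# which is a submodular function of the joint allocation.
RESOURCES = range(6)
ACTIONS = [frozenset({r}) for r in RESOURCES]
N_AGENTS = 3
TAU = 0.1  # temperature; as TAU -> 0, agents concentrate on best replies

def coverage(allocation):
    """Submodular objective: count of distinct resources covered."""
    covered = set()
    for choice in allocation:
        covered |= choice
    return len(covered)

def marginal_utility(i, allocation):
    """Agent i's payoff: its marginal contribution to the objective."""
    without_i = allocation[:i] + allocation[i + 1:]
    return coverage(allocation) - coverage(without_i)

def log_linear_step(allocation):
    """One round of log-linear learning: a uniformly random agent revises
    its action with probability proportional to exp(utility / TAU)."""
    i = random.randrange(N_AGENTS)
    utils = []
    for a in ACTIONS:
        trial = allocation[:i] + [a] + allocation[i + 1:]
        utils.append(marginal_utility(i, trial))
    weights = [math.exp(u / TAU) for u in utils]
    allocation[i] = random.choices(ACTIONS, weights=weights)[0]
    return allocation

# Run the dynamics from a random initial allocation.
allocation = [random.choice(ACTIONS) for _ in range(N_AGENTS)]
for _ in range(2000):
    allocation = log_linear_step(allocation)
print("final coverage:", coverage(allocation))
```

With low temperature, this Gibbs-style revision rule spends most of its time near high-payoff joint actions, which is why low-stability worst-case equilibria of the kind described in the abstract tend to be escaped in simulation.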
