Abstract

To better approximate solutions of the minimum-weighted vertex cover (MWVC) problem in multiagent systems, we present a distributed algorithm from the perspective of learning in games. For self-organized coordination and optimization, we model each vertex as a player in a potential game who makes decisions using only local information about itself and its immediate neighbors. The resulting Nash equilibria are classified into two categories, i.e., the inferior Nash equilibrium (INE) and the dominant Nash equilibrium (DNE). We show that the optimal solution must be a DNE. To achieve better approximation ratios, local rules of perturbation and weighted memory are designed, with the former destroying the stability of an INE and the latter facilitating the refinement of a DNE. By showing the existence of an improvement path from any INE to a DNE, we prove that when the memory length is larger than 1, our algorithm converges in finite time to a DNE, which cannot be improved by exchanging the action of a selected node with those of all its unselected neighbors. Moreover, increasing the memory length provides additional freedom for refining solution efficiency. Finally, extensive comparative experiments demonstrate the superiority of the presented methodology over the state of the art, in both solution efficiency and computation speed.

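To make the game-theoretic setup concrete, the following is a minimal Python sketch of asynchronous best-response dynamics for MWVC cast as a game, with a simple random perturbation standing in for the paper's perturbation rule. All names here (`best_response_mwvc`, `perturb`, the penalty construction) are illustrative assumptions; the paper's weighted-memory rule and the INE/DNE refinement machinery are not reproduced.

```python
import random

def local_cost(v, action, state, weights, adj, penalty):
    """Local cost seen by vertex v: its own weight if it joins the
    cover, plus a penalty for every incident edge left uncovered."""
    uncovered = 0 if action == 1 else sum(1 for u in adj[v] if state[u] == 0)
    return weights[v] * action + penalty * uncovered

def best_response_mwvc(adj, weights, rounds=500, perturb=0.05, seed=0):
    """Asynchronous best-response dynamics for MWVC viewed as a game.
    Each activated vertex picks the action (0 = out, 1 = in) minimizing
    its local cost; with probability `perturb` it plays randomly instead,
    a crude stand-in for the perturbation rule that destabilizes
    inferior equilibria."""
    rng = random.Random(seed)
    penalty = max(weights.values()) + 1   # leaving an edge uncovered is never worthwhile
    state = {v: 1 for v in adj}           # start from the trivial all-in cover
    vertices = list(adj)
    for _ in range(rounds):
        v = rng.choice(vertices)
        if rng.random() < perturb:
            state[v] = rng.randint(0, 1)
        else:
            state[v] = min((0, 1),
                           key=lambda a: local_cost(v, a, state, weights, adj, penalty))
    for v in vertices:                    # repair any edge the last perturbations exposed
        if state[v] == 0 and any(state[u] == 0 for u in adj[v]):
            state[v] = 1
    return {v for v in vertices if state[v] == 1}

# Tiny example: a triangle (1-2-3) plus a pendant vertex (4).
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
weights = {1: 1.0, 2: 2.0, 3: 1.5, 4: 3.0}
print(best_response_mwvc(adj, weights))   # a valid cover, e.g. {1, 3}
```

Each update uses only the agent's own weight and its neighbors' current actions, matching the local-information constraint stated in the abstract; the final pass merely guarantees feasibility after the last perturbations.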