Abstract

We consider the problem of n agents that share m common resources. The objective is to derive an optimal allocation that maximizes a global objective expressed as a separable concave function. We propose a decentralized, asynchronous gradient-descent method that is suitable for implementation in the case where the communication between agents is described by a dynamic network; this communication model accommodates situations such as mobile agents and communication failures. The method is shown to converge provided that the objective function has Lipschitz-continuous gradients. We further consider a randomized version of the same algorithm for the case where the objective function is nondifferentiable but has bounded subgradients. We show that both algorithms converge to near-optimal solutions and derive convergence rates in terms of the magnitude of the gradient of the objective function. We also show how to accommodate nonnegativity constraints on the resources using the derived results. Experimental results on problems of varying dimensions suggest that the algorithms are competitive with centralized approaches and scale well with problem size.
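For concreteness, the allocation problem summarized above is commonly written in the following form; this is a standard formulation consistent with the abstract (separable concave objective, shared-resource coupling, nonnegativity), not necessarily the authors' exact notation:

$$
\max_{x_1,\dots,x_n} \; \sum_{i=1}^{n} f_i(x_i)
\quad \text{subject to} \quad
\sum_{i=1}^{n} x_i = c, \qquad x_i \ge 0, \quad i = 1,\dots,n,
$$

where $x_i \in \mathbb{R}^m$ is agent $i$'s share of the $m$ resources, each $f_i$ is concave, and $c \in \mathbb{R}^m$ is the total resource budget.

To illustrate the flavor of a decentralized update over a communication network, the sketch below shows a generic center-free pairwise gradient-exchange step of the kind used in classical distributed resource allocation schemes; it is an assumption-laden illustration rather than the specific asynchronous update analyzed in the paper, and the names `pairwise_exchange`, `grads`, and `alpha` are hypothetical:

```python
import numpy as np

def pairwise_exchange(x, grads, i, j, alpha, lower=0.0):
    """One pairwise exchange between neighboring agents i and j.

    Resource is shifted toward the agent with the larger marginal
    utility, keeping x[i] + x[j] (and hence the global budget) fixed.
    `alpha` is a step size; clipping keeps both allocations >= `lower`,
    which models the nonnegativity constraint.
    """
    delta = alpha * (grads[j] - grads[i])               # > 0 means agent j values the resource more
    delta = np.clip(delta, lower - x[j], x[i] - lower)  # keep both allocations feasible
    x[i] -= delta
    x[j] += delta
    return x
```

Repeating such exchanges over the (possibly time-varying) edges of the communication network drives the agents' marginal utilities toward a common value, which is the first-order optimality condition for the formulation above when allocations are strictly positive.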
