Abstract

This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes. Individual nodes determine their optimal states through distributed coordination with other nodes over randomly varying backhaul links. This poses the technical challenge of designing a distributed, universal optimization policy that is robust to the random topology of the wireless network, a challenge that conventional deep neural networks (DNNs) with rigid structural configurations have not properly addressed. We develop a flexible DNN formalism termed the distributed message-passing neural network (DMPNN), whose forward and backward computations are independent of the network topology. A key enabler of this approach is an iterative message-sharing strategy over arbitrarily connected backhaul links. The DMPNN provides a convergent solution for iterative coordination by learning from numerous random backhaul interactions. The DMPNN is evaluated on various power control configurations in wireless networks, and extensive numerical results demonstrate its universality and viability over conventional optimization and DNN approaches.
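To make the topology-independent message-passing idea concrete, the following is a minimal sketch, not the authors' implementation, of one DMPNN-style coordination round: each node encodes a message from its local state, aggregates whatever messages arrive over the currently active backhaul links with a permutation-invariant mean, and updates its state with shared weights. The parameter names (W_self, W_msg, W_out) and the mean aggregation are illustrative assumptions; the same weights apply regardless of the number of nodes or links.

```python
import numpy as np

def message_passing_iteration(states, adjacency, W_self, W_msg, W_out):
    """One illustrative DMPNN-style update (hypothetical parameter names).

    states    : (N, d) current node state vectors
    adjacency : (N, N) 0/1 matrix of currently active backhaul links
    The same weight matrices are reused for every node and every topology,
    so the computation does not depend on the network structure.
    """
    # Each node encodes a message from its own state.
    messages = np.tanh(states @ W_msg)
    # Aggregate incoming messages with a permutation-invariant mean,
    # guarding against nodes that currently have no active neighbors.
    degree = adjacency.sum(axis=1, keepdims=True)
    aggregated = adjacency @ messages / np.maximum(degree, 1.0)
    # Update each node's state from its own state and the aggregate.
    return np.tanh(states @ W_self + aggregated @ W_out)

# Toy usage on a randomly connected 5-node network.
rng = np.random.default_rng(0)
N, d = 5, 8
adjacency = (rng.random((N, N)) < 0.4).astype(float)
np.fill_diagonal(adjacency, 0.0)
states = rng.standard_normal((N, d))
W_self, W_msg, W_out = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
for _ in range(3):  # a few coordination rounds over the random topology
    states = message_passing_iteration(states, adjacency, W_self, W_msg, W_out)
```

Because the per-node computation only sees its own state and an order-invariant aggregate of neighbor messages, the learned weights transfer across random topologies, which is the property the paper exploits.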
